Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

ASP.NET MVC 6 Hosting - ASPHostPortal :: Creating Custom Controller Factory ASP.NET MVC

February 20, 2016 00:01 by author Jervis

I was reading about “Control application behavior by using MVC extensibility points”, which is one of the objectives for the 70-486 Microsoft certification, and the explanation provided was not clear to me. So I decided to write about it to make it clear for myself, and I hope it helps you as well.

An ASP.NET MVC application contains the following class:

public class HomeController : Controller
{
   public HomeController(ILogger logger) // notice the parameter in the constructor
   {

   }
}

This throws an error with the DefaultControllerFactory; see the image below.

The application won’t be able to load the Home controller because its constructor has a parameter. You need to ensure that HomeController can be instantiated by the MVC framework. To accomplish this, we are going to use dependency injection.

The solution is to create a custom controller factory.

By default, the framework calls the parameterless constructor of the controller class. To have the MVC framework create controller instances whose constructors have parameters, you must use a custom controller factory. To accomplish this, you create a class that implements IControllerFactory and implement its methods. You then call the SetControllerFactory method of the ControllerBuilder class, passing your custom controller factory class.

Create a CustomControllerFactory class that implements IControllerFactory:

public class CustomControllerFactory : IControllerFactory
{
    public CustomControllerFactory()
    {
    }

    public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        ILogger logger = new DefaultLogger();
        var controller = new HomeController(logger);
        return controller;
    }

    public System.Web.SessionState.SessionStateBehavior GetControllerSessionBehavior(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        return System.Web.SessionState.SessionStateBehavior.Default;
    }

    public void ReleaseController(IController controller)
    {
        // Dispose the controller if it supports IDisposable
        var disposable = controller as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}

You can implement the CreateController() method in a more generic way, using reflection.

public class CustomControllerFactory : IControllerFactory
{
    private readonly string _controllerNamespace;

    public CustomControllerFactory(string controllerNamespace)
    {
        _controllerNamespace = controllerNamespace;
    }

    public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        // Assumes every controller lives in the given namespace and
        // has a constructor taking a single ILogger parameter.
        ILogger logger = new DefaultLogger();
        Type controllerType = Type.GetType(string.Concat(_controllerNamespace, ".", controllerName, "Controller"));
        IController controller = Activator.CreateInstance(controllerType, new object[] { logger }) as IController;
        return controller;
    }
}

Set your controller factory in Application_Start by using the SetControllerFactory method:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    GlobalConfiguration.Configure(WebApiConfig.Register);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);

    ControllerBuilder.Current.SetControllerFactory(typeof(CustomControllerFactory));
}
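One caveat: the typeof overload above asks MVC to create the factory through its parameterless constructor, which the reflection-based version does not have. If you go the reflection route, pass a constructed instance instead. A minimal sketch, where the namespace string is a placeholder for your real controllers namespace:

// Overload of SetControllerFactory that accepts an IControllerFactory instance.
// "MyApp.Controllers" is a placeholder, not a name from this article.
ControllerBuilder.Current.SetControllerFactory(
    new CustomControllerFactory("MyApp.Controllers"));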

This could be one of the objectives of the Microsoft certification exam 70-486, specifically under “Develop the user experience”, sub-objective “Control application behavior by using MVC extensibility points”.

Hope this helped you understand how to do dependency injection in controllers with MVC.

 

 



ASP.NET MVC Hosting - ASPHostPortal :: Session State Performance Issue in ASP.NET MVC

January 15, 2016 20:54 by author Jervis

I cannot recall any real web application that doesn’t make use of the Session State feature, which stores data that is available across multiple requests from the same browser. Moreover, modern web applications tend to make extensive use of Ajax requests to the server in order to avoid page refreshes. So the question is: have you ever noticed performance issues while making multiple Ajax requests to an ASP.NET MVC action when using Session data?

Putting Session into Context

Before demonstrating the real problem with code, let’s review the basics of how Session works. When a new request arrives at the server for the first time, meaning it carries no session cookie, the server will create a new session identifier, accessible through

System.Web.HttpContext.Current.Session.SessionID

However, this does not mean that from now on all requests back to the server will contain a session cookie. That happens only once a request stores some data in the Session. In other words, the ASP.NET Framework adds the session cookie to the response the first time some data is stored in the session. So how is this related to performance issues? Well, normally ASP.NET can process multiple requests from the same browser concurrently, which means, for example, that multiple Ajax requests can be processed simultaneously.

The above scheme works only when no session cookie is contained in the browser’s request, in other words, when the server hasn’t yet stored any data into the Session. Once the server does store some data into the session, and hence adds a session cookie to the response, all subsequent requests carrying that session cookie are queued and processed sequentially.

This is actually normal behavior: consider, for example, that multiple requests might otherwise modify or read the same Session key value simultaneously, which would certainly result in inconsistent data.

Code time

Create an ASP.NET Empty Web Application, checking the MVC option, and add the following HomeController.

[OutputCache(NoStore = true, Duration = 0)]
public class HomeController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    // GET: Home
    public ActionResult Index()
    {
        return View();
    }

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }

    public ActionResult StartSession()
    {
        System.Web.HttpContext.Current.Session["Name"] = "Jervis";

        return RedirectToAction("Index");
    }
}

The controller has a GetBox action which returns a random color from the list of colors defined in the class. We will use its value to add a div element in the view, with the background-color property set according to the returned value. Moreover, it has a StartSession action which simply stores a session value, and hence is the point at which the server adds a session cookie to the response. You will need to add the Index view, so right-click in the Index method, add the view, and paste the following code.

<!DOCTYPE html>
<html lang="en" ng-app="asyncApp">
<head>
    <title>Troubleshooting ASP.NET MVC Performance Issues</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap.min.css">
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/css/bootstrap-theme.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.2/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.js"></script>
    <link href="~/Content/Site.css" rel="stylesheet" />
</head>
<body ng-controller="asyncCtrl" ng-init="getBoxes()">
    <nav role="navigation" class="navbar navbar-default navbar-fixed-top">
        <div class="container-fluid">
            <!-- Brand and toggle get grouped for better mobile display -->
            <div class="navbar-header">
                <button type="button" data-target="#navbarCollapse" data-toggle="collapse" class="navbar-toggle">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
            </div>
            <!-- Collection of nav links and other content for toggling -->
            <div id="navbarCollapse" class="collapse navbar-collapse">
                <ul class="nav navbar-nav">
                    <li class="active"><a href="#">Performance testing</a></li>
                    <li>
                        @Html.ActionLink("Start Session", "StartSession")
                    </li>
                    <li>
                        <a class="links" ng-click="getBoxes()">Not resolved</a>
                    </li>
                    <li>
                        <a class="links" ng-click="getBoxes(true)">Resolved</a>
                    </li>
                    <li>
                        <form class="navbar-form">
                            <label class="checkbox" style="margin-top:5px">
                                @Html.CheckBox("isSessionNewChk", Session.IsNewSession, new { @disabled = "disabled" })
                                Is Session New
                            </label>
                        </form>
                    </li>
                </ul>
                <ul class="nav navbar-nav navbar-right">
                    <li><a href="#">{{boxes.length}} Boxes</a></li>
                </ul>
            </div>
        </div>
    </nav>
    <br /><br /><br />
    <div class="container">
        <div class="row">
            <div id="boxesContainer" ng-repeat="color in boxes track by $index">
            <div class="box" ng-class="color"></div>
            </div>
        </div>
        <br />
        <div class="row">
            <div id="timeOccured" ng-show="showResults" class="alert" ng-class="isResolved()" ng-bind="timeElapsed"></div>
        </div>
    </div>
    <script src="~/Scripts/app.js"></script>
</body>
</html>

Add a Content folder in the root of your application and create the following css stylesheet.

.box {
    height: 15px;
    width: 15px;
    margin: 2px;
    float:left;
} 

#timeOccured {
    clear:both;
} 

.links {
    cursor:pointer;
} 

.red {
    background-color: red;
} 

.green {
    background-color: green;
} 

.blue {
    background-color: blue;
} 

.black {
    background-color: black;
} 

.gray {
    background-color: gray;
} 

.yellow {
    background-color: yellow;
} 

.orange {
    background-color: orange;
}

You may have noticed that I make use of some simple AngularJS code, but that’s OK if you aren’t familiar with it. Add a Scripts folder and create the following app.js JavaScript file.

angular.module('asyncApp', [])
    .value('mvcuri', 'http://localhost:49588/home/getbox')
    .value('mvcurisessionresolved', 'http://localhost:49588/SessionResolved/getbox')
    .controller('asyncCtrl', function ($http, $scope, mvcuri, mvcurisessionresolved) { 

        $scope.boxes = [];
        $scope.showResults = false;
        var uri; 

        $scope.getBoxes = function (resolved) {
            var start = new Date();
            var counter = 300; 

            if (resolved)
                uri = mvcurisessionresolved;
            else
                uri = mvcuri; 

            // Init variables
            $scope.boxes = [];
            $scope.showResults = false;
            $scope.timeElapsed = ''; 

            for (var i = 0; i < 300; i++) {
                $http.get(uri)
                    .success(function (data, status, headers, config) {
                        $scope.boxes.push(data);
                        counter--;

                        if (counter == 0) {
                            var time = new Date().getTime() - start.getTime();
                            $scope.timeElapsed = 'Time elapsed (ms): ' + time;
                            $scope.showResults = true;
                        }
                    })
                    .error(function (error) {
                        $scope.timeElapsed = error.Message;
                    });
            }
        }; 

        $scope.isResolved = function () {
            return uri == mvcuri ? 'alert-danger' : 'alert-success';
        } 

    });

Make sure you replace the localhost:port values with yours. Before explaining all of the application’s functionality, let’s add a new MVC controller named SessionResolvedController.

[OutputCache(NoStore = true, Duration = 0)]
public class SessionResolvedController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }
}

At this point you should be able to fire up your application.

So what is happening here? When the application starts, 300 Ajax calls are sent to the HomeController’s GetBox action and, for each color returned, a corresponding box is added to the view.

<body ng-controller="asyncCtrl" ng-init="getBoxes()"> 

<div id="boxesContainer" ng-repeat="color in boxes track by $index">
    <div class="box" ng-class="color"></div>
</div>

Every time you refresh the page, you will notice that the Is Session New checkbox is checked, which means no session cookie has been sent to the browser yet. All Ajax requests are processed concurrently as expected, and the time taken to complete all the Ajax calls is displayed in a red box under the boxes. The Start Session button will simply store a session value as follows.

System.Web.HttpContext.Current.Session["Name"] = "Jervis";

Press the button and notice the difference in the elapsed time for displaying all 300 boxes. Also notice that the checkbox isn’t checked anymore.

Now that there is a session cookie in each request to the server, all requests are processed sequentially as described previously. That’s why we need twice the time we needed before storing a session variable. You can repeat the same Ajax requests without refreshing the page by clicking the Not Resolved button.

Solution

At the moment, if you press the Resolved button you will get the same results calling the SessionResolvedController’s GetBox() action.

<a class="links" ng-click="getBoxes(true)">Resolved</a>

Notice that we don’t read or modify any Session state data when calling the GetBox() actions. So the question is: is there anything we can do to prevent our requests from being processed sequentially, despite the fact that they contain a session cookie? The answer is YES, and it’s hiding in the SessionState attribute that can be applied to the controller class as follows:

[SessionState(SessionStateBehavior.Disabled)]
public class SessionResolvedController : Controller
{
    public List<string> boxes = new List<string>() { "red", "green", "blue", "black", "gray", "yellow", "orange" };

    public string GetBox()
    {
        System.Threading.Thread.Sleep(10);
        Random rnd = new Random();
        int index = rnd.Next(0, boxes.Count);

        return boxes[index];
    }
}

Applying this attribute to the controller makes all requests targeting it get processed concurrently. Build and run your application and repeat what we did before. Notice that the Resolved button now finishes in half the time, matching what was needed when no session cookie existed in the request.
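As a side note, if a controller still needs to read session values, SessionStateBehavior.ReadOnly is a useful middle ground: read-only requests are not serialized behind each other, only behind requests that can write. A minimal sketch (the controller and action names here are hypothetical):

[SessionState(SessionStateBehavior.ReadOnly)]
public class SessionReadOnlyController : Controller
{
    public string GetName()
    {
        // Session data can still be read here, and these requests
        // are processed concurrently with other read-only requests.
        var name = Session["Name"] as string;
        return name ?? "unknown";
    }
}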

I ran the application without debugging in order to get better results. If you run with debugging you will see higher delays, but that should not be a problem; the differences will still be noticeable.

The following is a complete demonstration of what we have done so far. When I fired up the application for the first time, it took 928 milliseconds to complete all 300 Ajax requests. Then I clicked to create a session cookie, and that caused a significant performance hit, increasing the delay to 2201 milliseconds! When I requested the data through the SessionResolvedController, the delay was almost the same as when there wasn’t any session cookie.

I have also used the

[OutputCache(NoStore = true, Duration = 0)]

attribute on the MVC controllers, which asks the browser (client) not to cache the content retrieved from the server. This way the test is even more reliable.



SQL 2014 Hosting - ASPHostPortal :: Make Your SSAS Works Like a Private Jet!

December 10, 2015 19:54 by author Jervis

SQL Server Analysis Services (SSAS) Tabular is a popular choice as an analytical engine for many customers. With its state-of-the-art compression algorithms, multi-threaded query processor and in-memory capabilities, SSAS Tabular can provide super quick access to data for reporting client applications. However, as a consultant, I have been called in by many clients to resolve slow query performance when accessing data in SSAS Tabular models. My experience has taught me that most, if not all, of these performance issues can be resolved by taking care of the following five subject areas.

Estimate Current Size and Growth Carefully

Tabular models compress data really well, and on average you can expect around 10x compression (though it can be much more or less depending on the cardinality of your data). However, when you are estimating the size of your model as well as its future growth, a rough figure like this is not going to be optimal. If you already have data sitting in a data warehouse, import a subset of that data (say, a month), find the model size for that, and then extrapolate based on the required number of rows/years as well as the number of columns. If not, try to at least get a subset of the data from the source to find the model size. Tools like BISM Memory Report and Vertipaq Analyzer can further help in this process.

It is also important to record the number of users who will be accessing the system currently as well as the estimated growth for the number of users.

Select or Upgrade Hardware Appropriately

Everyone knows SSAS Tabular is a memory-intensive application, and one major issue I have seen is that only the RAM is considered when hardware selections are made. However, just focusing on the RAM is not enough; there are a lot of other variables. Assuming all the other variables are constant and there is an unlimited budget, these are the recommendations:

CPU Speed – The faster, the better. A faster clock helps compute results sooner, especially when the single-threaded formula engine is the bottleneck.

CPU Cores – In theory, the more the better, as cores help in managing concurrent user loads. In reality, however, a rise in the number of cores usually corresponds to a decrease in CPU speed due to management overhead, so a balanced approach has to be taken when determining core count and speed. Also, licensing costs increase with the number of cores for much software.

CPU Sockets – The fewer, the better, as SSAS Tabular is not NUMA aware up to SQL 2014. This is expected to change in SQL 2016, where some NUMA optimization has been made. For large tabular models, it might be a challenge to go single-socket, as the amount of RAM a system can support depends on the CPU sockets.

CPU Cache – The more, the better. Retrieving data from CPU caches is 10-100x faster than retrieving data from RAM.

CPU Architecture – The newer, the better, due to hardware performance optimizations. For example, an Intel Xeon processor with the Haswell architecture is always going to be faster than one with the Sandy Bridge architecture, keeping all other variables constant.

Amount of RAM – You should have at least 2.5x the model size if the model is going to be processed on the same server; for example, a 40 GB model would call for roughly 100 GB of RAM. The amount of RAM can be less in certain scale-out architectures where the model is processed on a separate server.

RAM Speed – The faster, the better (yes, RAM has speed too!). This is very important for a memory-bound application like Tabular; always go for the faster speeds if budget allows.

Storage – Not important for query performance, as it has no effect there. However, if budget allows, it might not be a bad idea to get faster storage like SSDs, as that will help in maintenance-related activities like backup and restore, or getting the tabular model online faster when the service is restarted. Apart from this, there are other factors like network latency and server architecture (scale-out) that have to be considered, but depending on the budget and specific customer requirements, a balanced approach will have to be taken.

Design the Data Model Properly

Tabular is really good at performance and, in the case of small models, is extremely forgiving of bad design. However, when the amount of data grows, performance problems begin to show up. In theory, you will get the best performance in SSAS Tabular if the entire data set is flattened into a single table. In reality, however, this would translate to an extremely bad user experience as well as a lengthy and expensive ETL process, so the best practice is generally to have a star schema. It is also recommended to include only the relevant columns from the source tables, as adding columns increases the model size, which in turn results in slower query performance. An increase in the number of rows might still be OK as long as the cardinality of the columns doesn’t change much.

Depending on specific customer requirements, there can be deviations from the best practices. For example, for one client’s very large production model, we built custom aggregate tables alongside the detailed fact table. The resulting measure had a conditional statement to retrieve data from the aggregate table if the detailed-level dimension data was not used in the report. Since the aggregate table was only 1/10 the size of the detailed fact table, the query came back 10x faster whenever the details were not used, which was almost 90% of the time.

Optimize the DAX Calculations

In the case of small models, Tabular is also extremely forgiving of bad DAX code. However, just as with bad design, performance takes a turn for the worse as you increase the data, add more users, or run complex queries. DAX is the most difficult item on this list to tune, and it is important to have a strategy for maintaining and tuning performance. A good place to start is the Performance Tuning of Tabular Models in SSAS 2012 whitepaper.

Monitor User Query Patterns and Train Users

Once your model is in production, it is important to keep monitoring the user query patterns as well as the resources to spot potential bottlenecks. Through this, you can find out whether the performance issues are caused by inefficient DAX, bad design, insufficient resources or, most importantly, whether a user is simply using the model inefficiently. For example, in one case we found that the slow performance for all users was due to a single user dumping the entire 100 GB model into spreadsheets so he could perform custom calculations on top of it. This blocked the queries for all the other users and made things really slow for them. With appropriate requirements gathering, we ensured all the required calculations for that user were in the model and then trained the user to use the model for his analytics.

The success of any Tabular project depends on adoption by the end users, and needless to say, adoption will be much better if the system is fast. These five tips will give you a jumpstart on that journey.

Looking to Use SQL Server Analysis Services Hosting?

As this is an intermediate service, you can get SQL Server Analysis Services on our Windows Dedicated Server plans. You can start with our Bundled Cloud Platinum Class to get this service running on your server. This plan includes:

- Windows Server 2008/2012 license
- 2 x 2.0 GHz Core
- 4 GB RAM --> FREE upgrade to 8 GB RAM (use promo code ‘DOUBLERAM’)
- 2 x 500 GB Storage
- 20,000 GB Bandwidth
- 1,000 Mbps Connection Speed
- 4 Static IPs
- Full 24 x 7 RDP Access
- Full 24/7 Firewall Protection
- SQL Server 2008/2012 Standard Edition
- MySQL Database Support
- Plesk Control Panel
- Antivirus
- Unlimited MSSQL Databases
- Unlimited MySQL Databases
- FREE SmarterMail service if you register this December

Get a HUGE discount this Christmas! For more information, please visit our official site at http://www.asphostportal.com.



ASP.NET MVC 6 Hosting - ASPHostPortal :: Introduction Tag Helpers MVC 6

December 3, 2015 21:31 by author Jervis

MVC 6 introduces a new feature called Tag Helpers. In this post, we will explore how Tag Helpers can be used to improve the readability of your Razor views that generate HTML forms.

How do Tag Helpers work?

Tag Helpers are an alternative to HTML helpers for generating HTML. The easiest way to show this is with an example.

Let’s start by looking at an extremely simple example of a login view bound to a LoginViewModel containing a UserName and a Password:

public class LoginViewModel
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

Here is how we would generate an HTML input for the UserName property using an HTML Helper and a Tag Helper.

<!--Create an input for UserName using Html Helper-->
@Html.EditorFor(l => l.UserName)
<!--Create an input for UserName using Tag Helper-->
<input asp-for="UserName" />

With the HTML helper, we call C# code that returns some HTML. With Tag Helpers, we augment the HTML with special tag helper attributes. These special attributes are processed by MVC, which generates the HTML. Both of the approaches above will generate an input that looks like this:

<input name="UserName" class="text-box single-line" id="UserName" type="text" value="">

Why is this better?

At first glance, it might look like Tag Helpers are just a syntax change with no obvious benefits. The difference, however, can make your Razor forms much more readable. Let’s say we wanted to do something simple like adding a class to our UserName input:

<!--Create an input with additional class for UserName using Html Helper-->
@Html.EditorFor(l => l.UserName, new { htmlAttributes = new { @class = "form-control" } })
<!--Create an input with additional class for UserName using Tag Helper-->
<input asp-for="UserName" class="form-control" />

As you can see, the HTML helper approach becomes very hard to read, while the tag helper approach is clear and concise.

Here is a full-blown example of a login form using HTML helpers:

@using (Html.BeginForm("Login", "Account", new { ReturnUrl = ViewBag.ReturnUrl }, 
FormMethod.Post, new { role = "form" }))
{
    @Html.AntiForgeryToken()
    @Html.ValidationSummary(true, "", new { @class = "text-danger" })
 
    <div class="form-group">
        <div class="row">
            @Html.LabelFor(m => m.UserName, new { @class = "col-md-2 control-label" })
            <div class="col-md-10">
                @Html.TextBoxFor(m => m.UserName, new { @class = "form-control" })
                @Html.ValidationMessageFor(m => m.UserName, "", new { @class = "text-danger" })
            </div>
        </div>
    </div>
    <div class="form-group">
        <div class="row">
            @Html.LabelFor(m => m.Password, new { @class = "col-md-2 control-label" })
            <div class="col-md-10">
                @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
                @Html.ValidationMessageFor(m => m.Password, "", new { @class = "text-danger" })
            </div>
        </div>
    </div>
 
    <div class="form-group">
        <div class="row">
            <div class="col-md-offset-2 col-md-2">
                <input type="submit" value="Log in" class="btn btn-primary" />
            </div>
        </div>
    </div>
}

And now the same form using tag helpers:

<form asp-controller="Account" asp-action="Login" asp-route-returnurl="@ViewBag.ReturnUrl" 
method="post" class="form-horizontal" role="form">
    <div asp-validation-summary="ValidationSummary.All" class="text-danger"></div>
    <div class="form-group">
        <label asp-for="UserName" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="UserName" class="form-control" />
            <span asp-validation-for="UserName" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <label asp-for="Password" class="col-md-2 control-label"></label>
        <div class="col-md-10">
            <input asp-for="Password" class="form-control" />
            <span asp-validation-for="Password" class="text-danger"></span>
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <input type="submit" value="Log in" class="btn btn-default" />
        </div>
    </div>
</form>

Overall, the tag helper version is much more readable. I especially like that we no longer need a using statement to generate a form element; that had always felt like a bit of a hack to me.

Another nice thing about the form tag helpers is that we don’t need to remember to explicitly add the AntiForgeryToken. The form tag helper adds it automatically unless we explicitly turn it off using asp-anti-forgery="false".
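Under the covers this renders the familiar anti-forgery hidden input inside the form; the markup should look roughly like this (the token value is illustrative):

<input name="__RequestVerificationToken" type="hidden" value="..." />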

Of course, Visual Studio does a good job of highlighting the tag helper attributes, so it is easy to distinguish them from regular HTML attributes.

We also get full Intellisense inside the asp-for attributes.

How to enable Tag Helpers

The MVC Tag Helpers live in the Microsoft.AspNet.Mvc.TagHelpers package, so you will need to add a reference to it in your project.json file. Once you have added the reference, you can enable tag helpers in all your views by adding the following line to _GlobalImports.cshtml.

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
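For reference, the dependency entry in project.json might look like the following; the exact beta version is an assumption and should match the other MVC packages you reference:

"dependencies": {
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-beta1"
}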



ASP.NET MVC Hosting - ASPHostPortal :: The Difference Between Controller and View in ASP.NET MVC

November 12, 2015 20:28 by author Jervis

One of the basic rules of MVC is that views should be only, and exactly, views; that is to say, objects that present to the user something that is already worked out and calculated.

They should perform little, if any, calculation. All the significant code should be in the controllers. This allows better testability and maintainability.

Is this, in Microsoft’s interpretation of MVC, also justified by performance?

We tested this with a very simple program that does the following:

– creates 200,000 “cat” objects and adds them to a list

– creates 200,000 “owner” objects and adds them to a list

– creates 200,000 “catowner” objects (the many-to-many relation among cats and owners) and adds them to a list

– navigates through each cat, finds his/her owner, and removes the owner from the list of owners (we don’t know whether the cats really wanted this, but their freedom suits our purposes).

We’ve run this code in a controller and in a razor view.

The results seem to suggest that code in views runs just as fast as in controllers, even if we don’t pre-compile the views (the compilation time in our test is negligible).

The average result for the code with the logic in the controller is 18.261 seconds.

The average result for the code with the logic in the view is 18.621 seconds.

The performance seems therefore very similar.

Here is how we got to this result.

Case 1: Calculations are in the CONTROLLER

Models:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace WebPageTest.Models
{
    public class Owner
    {
        public string Name { get; set; }
        public DateTime DOB { get; set; }
        public virtual CatOwner CatOwner { get; set; }
    }

    public class Cat
    {
        public string Name { get; set; }
        public DateTime DOB { get; set; }
        public virtual CatOwner CatOwner { get; set; }
    }

    public class CatOwner
    {
        public virtual Cat Cat { get; set; }
        public virtual Owner Owner { get; set; }
    }
}

Controller:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using WebPageTest.Models;

namespace WebPageTest.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            Stopwatch howLongWillItTake = new Stopwatch();
            howLongWillItTake.Start();
            List<Owner> allOwners = new List<Owner>();
            List<Cat> allCats = new List<Cat>();
            List<CatOwner> allCatOwners = new List<CatOwner>();
            // create lists with 200000 cats, 200000 owners, 200000 relations
            for (int i = 0; i < 200000; i++)
            {
                // Cat
                Cat CatX = new Cat();
                CatX.Name = "Cat " + i.ToString();
                CatX.DOB = DateTime.Now.AddDays(i / 10);
                // Owner
                Owner OwnerX = new Owner();
                OwnerX.Name = "Owner " + i.ToString();
                OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
                // Relationship "table"
                CatOwner CatOwnerXX = new CatOwner();
                CatOwnerXX.Cat = CatX;
                // Relations
                CatOwnerXX.Owner = OwnerX;
                CatX.CatOwner = CatOwnerXX;
                OwnerX.CatOwner = CatOwnerXX;
                // add to lists
                allCats.Add(CatX);
                allOwners.Add(OwnerX);
                allCatOwners.Add(CatOwnerXX);
            }
            // now remove all the items
            foreach (Cat CatToDelete in allCats)
            {
                Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
                allOwners.Remove(OwnerToRemove);
            }
            // now all cats are free
            int numberOfCats = allCats.Count();
            int numberOfOwners = allOwners.Count();
            howLongWillItTake.Stop();
            long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
            // give info to the view
            ViewBag.numberOfCats = numberOfCats;
            ViewBag.numberOfOwners = numberOfOwners;
            ViewBag.elapsedTime = elapsedTime;
            return View();
        }
    }
}

View:

<div class="row">
    <div class="col-md-12">
        <hr />
        <b>Results</b>
        <br />
        Cats: @ViewBag.numberOfCats
        <br />
        Owners: @ViewBag.numberOfOwners
        <br />
        ElapsedTime in milliseconds: @ViewBag.ElapsedTime
        <hr />
    </div>
</div>

Case 2: Calculations are in the VIEW (pre-compiled)

Models: same as above

Controller:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace WebPageTest.Controllers
{
    public class HomeBisController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}

View:

@using System;
@using System.Collections.Generic;
@using System.Diagnostics;
@using System.Linq;
@using System.Web;
@using WebPageTest.Models;
@using System.Web.Mvc;
@{
    Stopwatch howLongWillItTake = new Stopwatch();
    howLongWillItTake.Start();
    List<Owner> allOwners = new List<Owner>();
    List<Cat> allCats = new List<Cat>();
    List<CatOwner> allCatOwners = new List<CatOwner>();
    // create lists with 200000 cats, 200000 owners, 200000 relations
    for (int i = 0; i < 200000; i++)
    {
        // Cat
        Cat CatX = new Cat();
        CatX.Name = "Cat " + i.ToString();
        CatX.DOB = DateTime.Now.AddDays(i / 10);
        // Owner
        Owner OwnerX = new Owner();
        OwnerX.Name = "Owner " + i.ToString();
        OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
        // Relationship "table"
        CatOwner CatOwnerXX = new CatOwner();
        CatOwnerXX.Cat = CatX;
        // Relations
        CatOwnerXX.Owner = OwnerX;
        CatX.CatOwner = CatOwnerXX;
        OwnerX.CatOwner = CatOwnerXX;
        // add to lists
        allCats.Add(CatX);
        allOwners.Add(OwnerX);
        allCatOwners.Add(CatOwnerXX);
    }
    // now remove all the items
    foreach (Cat CatToDelete in allCats)
    {
        Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
        allOwners.Remove(OwnerToRemove);
    }
    // now all cats are free
    int numberOfCats = allCats.Count();
    int numberOfOwners = allOwners.Count();
    howLongWillItTake.Stop();
    long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
}
<div class="row">
    <div class="col-md-12">
        <hr />
        <b>Results</b>
        <br />
        Cats: @numberOfCats
        <br />
        Owners: @numberOfOwners
        <br />
        ElapsedTime in milliseconds: @elapsedTime
        <hr />
    </div>
</div>

 



SQL 2014 Hosting - ASPHostPortal :: How to Optimize Your SQL Query

October 6, 2015 08:59 by author Jervis

Modern-day software applications have millions of concurrent users. Developing an efficiently serviceable application requires a huge amount of effort and many tools and techniques. Software developers always try to improve application performance by improving design, coding and database development. For database development, query optimization and evaluation techniques play a vital part.

Select only the required fields

It is very important to avoid selecting unnecessary data in a query. Select only the fields you need, not all the fields of the table.

SELECT login_id, password FROM tbluser

Index

Properly created indexes help optimize search results. You need a good understanding of the database before selecting a well-performing index; choosing a heavily used field as the index is very important.

CREATE CLUSTERED INDEX ind_login_id ON tbluser(login_id)

Primary Key

The primary key is the most important index of the table. The most important thing about a primary key is the selection of a short and unique field. This allows fast access to the data records.

CREATE TABLE tbluser(
  id INT,
  name VARCHAR(150),
  email VARCHAR(100),
  login_id VARCHAR(100),
  password VARCHAR(10),
  PRIMARY KEY (id)
)

Index unique columns

Indexing a unique column will improve searching and increase the efficiency of the database. You must have a good understanding of the data fields and their usage before indexing a unique column; indexing a rarely used column does not help improve the efficiency of the database.

CREATE INDEX ind_email ON tbluser(email)  

Select limited records

No user interface can visualize thousands of records at once, so there is no point in selecting all the records at once. Always limit the selection when you have a large number of records, and select only the required data.

SELECT TOP 10 id, name, email, login_id, password FROM tbluser

Selection of correct data type and length

Use the most appropriate data type and the correct length for the data. A bad data type selection will produce bulky databases and poor performance, while appropriate choices improve resource utilization on the database server.

CREATE TABLE tbluser(id INT,  
   name VARCHAR(150),  
   email VARCHAR(100),  
   login_id VARCHAR(100),  
   password VARCHAR(10)  
)  

Avoid IN sub-queries

Always avoid the use of IN sub-queries in your applications. An IN sub-query evaluates all the records of table A against table B (a product of records) before selecting the required data.

SELECT login_id,name, email FROM tbluser WHERE login_id IN ( SELECT login_id FROM tbllogin_details)

One of the correct ways is to use an INNER JOIN, as in the following:

SELECT login_id,name, email FROM tbluser INNER JOIN tbllogin_details ON tbluser.login_id =tbllogin_details.login_id 
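Another common rewrite, when you only need to test for existence, is EXISTS; a sketch against the same tables:

SELECT login_id, name, email FROM tbluser u WHERE EXISTS (SELECT 1 FROM tbllogin_details d WHERE d.login_id = u.login_id)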

Avoid the NOT operator

Avoid using the NOT operator when the number of qualifying records is lower than the number of unqualified records. Always prefer a positive operator such as LIKE or EXISTS over NOT LIKE or NOT EXISTS.

SELECT * FROM tbluser WHERE email NOT LIKE '%gmail%'  

The preferred way is to filter on the values you do want, for example:

SELECT * FROM tbluser WHERE email LIKE '%yahoo%'  



ASP.NET MVC 6 Hosting - ASPHostPortal :: Introduction ASP.NET vNext and Features

September 9, 2015 08:15 by author Jervis

Microsoft has announced ASP.NET vNext, in which the MVC, Web API, and Web Pages frameworks will be merged into one framework, called MVC 6. This new framework removes a lot of the overlap between the existing MVC and Web API frameworks: it uses a common set of abstractions for routing, action selection, filters, model binding, and so on. You can use the framework to create both UI (HTML) and web APIs.

ASP.NET vNext has been designed to give you a .NET stack for building modern cloud-based apps. ASP.NET vNext builds on .NET vNext.

Asp.NET vNext Main Features

ASP.NET vNext Provides a Cloud-Optimized Mode

This feature is really helpful for developers. ASP.NET vNext can target .NET vNext (in cloud-optimized mode). This means that services such as session state and caching can be replaced based on whether the app is running in the cloud or in a traditional hosting environment.

Side-by-Side Support for .NET Framework Versions

When targeting cloud-optimized mode in .NET vNext, ASP.NET vNext lets you deploy the .NET Framework with the app. Since the .NET vNext framework can be deployed alongside the app, each app can run a different version of .NET vNext side by side and upgrade separately, all on the same machine.

More Enhancements for Developers

In ASP.NET vNext, you can now edit your code files and refresh the browser to see the changes without explicitly building your app. You can also edit your application outside of Visual Studio.

ASP.NET vNext projects have a project.json file where all the project dependencies are stored. This makes it easier to open vNext projects outside of Visual Studio, so that you can edit them using any editor, such as Notepad. You can even edit ASP.NET vNext projects in the cloud.
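As an illustration only (the package name and beta version are placeholders from that era), a minimal project.json could look like this:

{
    "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-beta1"
    }
}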

Helps in Building Both Web Sites and Services

MVC and Web API have been merged into a single programming model. For example, there are now unified controller, routing and model binding concepts between them. You can have a single controller that returns both MVC views and formatted Web API responses on the same HTTP verb.
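As a hedged sketch of what this unification allows (the controller, route and data here are hypothetical, not from the vNext announcement):

using System.Collections.Generic;
using Microsoft.AspNet.Mvc;

public class ProductsController : Controller
{
    // Returns an MVC view for browsers.
    public IActionResult Index()
    {
        return View();
    }

    // Returns data that is serialized like a Web API response,
    // from the same controller class.
    [HttpGet("api/products")]
    public IEnumerable<string> List()
    {
        return new[] { "apple", "pear" };
    }
}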

Modular Stack

ASP.NET vNext will ship as NuGet packages. NuGet packages will also be the unit of reference in your application. NuGet packages and library references are treated the same, so it is easier to manage the references in your project.

This makes it possible for an application developer to choose exactly what functionality to bring into an application. In previous versions of ASP.NET, features such as HttpContext, Session, Caching, and Membership were baked into the framework. As an app developer, if you do not need these features, you can now choose not to bring them into your app.

Support for Dependency Injection

Dependency injection is built into vNext and is consistent across the stack. All of the vNext components, such as MVC, Web API, SignalR, EF and Identity, will use the same DI. This allows the framework to provide the right set of services based on the environment you are running in.
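A minimal sketch of registering services with the built-in container (the interfaces and classes named here are hypothetical):

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IClock, SystemClock>();        // one instance for the whole app
    services.AddScoped<IEmailSender, SmtpEmailSender>(); // one instance per request
    services.AddTransient<IGuidFactory, GuidFactory>();  // a new instance every time
}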

New Configuration

There is a new configuration system which can read values from environment variables. It provides a unified API for accessing configuration values, so you can use the same APIs to access configuration values locally or in Azure.
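A short sketch of that API, mirroring what the Entity Framework 7 walkthrough later in this blog does with its config.json:

// Reads config.json first, then lets environment variables override it.
var configuration = new Configuration()
    .AddJsonFile("config.json")
    .AddEnvironmentVariables();

// The key path mirrors the JSON structure; the same call works locally or in Azure.
var connectionString = configuration.Get("Data:DefaultConnection:ConnectionString");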

vNext is Open Source

The entire source code is already released as open source via the .NET Foundation. You can see the source at https://github.com/aspnet and follow progress on vNext in real time. You can also send pull requests and contribute to the source code.

vNext Has Cross-Platform Support

We're developing vNext with cross-platform in mind, including an active collaboration with Xamarin to ensure that cloud-optimized .NET applications can run on Mac or Linux on top of the Mono runtime.

 



Entity Framework 7 Hosting - ASPHostPortal :: Introduction Entity Framework 7, ASP.NET 5 and ASP.NET MVC 6

August 20, 2015 08:44 by author Jervis

Introduction to ASP.NET 5, MVC 6 Web API and Entity Framework 7

In this post, we will walk through a simple example using ASP.NET 5 and the new Entity Framework 7. We will cover:

- Using the ASP.NET 5 empty template to build the Web API from scratch.
- Overview of the new project structure in VS 2015 and how to use the new dependency management tool.
- Configuring ASP.NET 5 pipeline to add only the components needed for our Web API.
- Using EF 7 commands and the K Version Manager (KVM) to initialize and apply DB migrations.

Step 1 – Create an Empty Project

Open Visual Studio 2015 and create a new project named “Registration_MVC6WebApi”.

Then select the template named “ASP.NET 5 Empty”.

Step 2 – Adding the Dependencies

Once you have created the project, you will see a file named “project.json”; this file contains all your project settings, along with a section for managing the project’s dependencies on other frameworks/components.

We used to manage packages/dependencies using the NuGet package manager, and you can still do so with the enhanced NuGet package manager tool that ships with VS 2015, but in our case we’ll add all the dependencies using the “project.json” file and benefit from the IntelliSense provided, as in the image below:

We will now add the dependencies needed to configure our Web API, so open the file “project.json” and replace the “dependencies” section with the section below:

"dependencies": {
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta1",
        "EntityFramework": "7.0.0-beta1",
        "EntityFramework.SqlServer": "7.0.0-beta1",
        "EntityFramework.Commands": "7.0.0-beta1",
        "Microsoft.AspNet.Mvc": "6.0.0-beta1",
        "Microsoft.AspNet.Diagnostics": "1.0.0-beta1",
        "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-beta1"
    }

The purpose of each dependency we’ve added is as follows:

  • Microsoft.AspNet.Server.IIS: We want to host our Web API using IIS, so this package is needed. If you are planning to self-host your Web API, there is no need to add this package.
  • EntityFramework & EntityFramework.SqlServer: Our data provider for the Web API will be SQL Server. Entity Framework 7 can be configured to work with different data providers, and not only relational databases; the data providers supported by EF 7 are SqlServer, SQLite, AzureTableStorage, and InMemory. More about EF 7 data providers here.
  • EntityFramework.Commands: This package makes the DB migration commands available in our Web API project via KVM; more about this later in the post.
  • Microsoft.AspNet.Mvc: This is the core package which adds all the components needed to run Web API and MVC.
  • Microsoft.AspNet.Diagnostics: Basically, this package is used to display a nice welcome page when you request the base URI of the API in a browser. You can skip this if you want, but it is nicer to display a welcome page instead of the 403 page displayed by the older Web API 2.
  • Microsoft.Framework.ConfigurationModel.Json: This package is responsible for loading and reading the configuration file named “config.json”, which we’ll add in a later step. That file is used to set up the “IConfiguration” object.

The last thing we need to add to the “project.json” file is a section named “commands”, as in the snippet below:

"commands": {
        "ef": "EntityFramework.Commands"
}

We’ve added the short prefix “ef” for EntityFramework.Commands, which will allow us to run EF commands, such as initializing and applying DB migrations, using KVM.

Step 3 – Adding the config.json Configuration File

Now right-click on your project and add a new item of type “ASP.NET Configuration File”, naming it “config.json”. You can think of this file as a replacement for the legacy Web.config file. For now, it will contain only the connection string to our SQL database. I’m using SQL Express here, and you can change this to your preferred SQL Server instance.

{
    "Data": {
        "DefaultConnection": {
            "Connectionstring": "Data Source=.\\sqlexpress;Initial Catalog=RegistrationDB;Integrated Security=True;"
        }
    }
}

Step 4 – Configuring the ASP.NET 5 Pipeline

The Startup class is responsible for adding the components needed in our pipeline. Currently, with the ASP.NET 5 empty template, the class is empty and our web project literally does nothing. I’ll add all the code to the Startup class at once, then describe what each part is responsible for, so open the file Startup.cs and paste in the code below:

using System;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.AspNet.Hosting;
using Microsoft.Framework.ConfigurationModel;
using Microsoft.Framework.DependencyInjection;
using Registration_MVC6WebApi.Models;

namespace Registration_MVC6WebApi
{
    public class Startup
    {
        public static IConfiguration Configuration { get; set; }

        public Startup(IHostingEnvironment env)
        {
            // Setup configuration sources.
            Configuration = new Configuration().AddJsonFile("config.json").AddEnvironmentVariables();
        }
        public void ConfigureServices(IServiceCollection services)
        {
            // Add EF services to the services container.
            services.AddEntityFramework().AddSqlServer().AddDbContext<RegistrationDbContext>();

            services.AddMvc();

            //Resolve dependency injection
            services.AddScoped<IRegistrationRepo, RegistrationRepo>();
            services.AddScoped<RegistrationDbContext, RegistrationDbContext>();
        }
        public void Configure(IApplicationBuilder app)
        {
            // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
            app.UseMvc();

            app.UseWelcomePage();

        }
    }
}

Step 5 – Adding Models, Database Context, and Repository

Now we’ll add a file named “Course” which contains two classes: “Course” and “CourseStatusModel”. Those classes represent our domain data model, so for better code organization add a new folder named “Models”, then add the new file containing the code below:

using System;
using System.ComponentModel.DataAnnotations;

namespace Registration_MVC6WebApi.Models
{
    public class Course
    {
        public int Id { get; set; }
        [Required]
        [StringLength(100, MinimumLength = 5)]
        public string Name { get; set; }
        public int Credits { get; set; }
    }

    public class CourseStatusModel
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }
}

Now we need to add the database context class, which will be responsible for communicating with our database. Add a new class, name it “RegistrationDbContext”, and paste in the code snippet below:

using Microsoft.Data.Entity;
using System;
using Microsoft.Data.Entity.Metadata;

namespace Registration_MVC6WebApi.Models
{
    public class RegistrationDbContext :DbContext
    {
        public DbSet<Course> Courses { get; set; }

        protected override void OnConfiguring(DbContextOptions options)
        {
            options.UseSqlServer(Startup.Configuration.Get("Data:DefaultConnection:ConnectionString"));
        }
    }
}

Basically, what we’ve implemented here is adding our Course data model as a DbSet, so it will become a database table once we run the migrations. Note that there is a new method named “OnConfiguring” which we can override in order to specify the data provider that our DB context works with.

In our case we’ll use SQL Server. The “UseSqlServer” extension method accepts a connection string as its parameter, so we read it from our “config.json” file by passing the key “Data:DefaultConnection:ConnectionString” to the “Configuration” object we created earlier in the Startup class.

Lastly, we need to add the interface “IRegistrationRepo” and its implementation “RegistrationRepo”. Add two new files under the “Models” folder named “IRegistrationRepo” and “RegistrationRepo”, and paste in the two code snippets below:

IRegistrationRepo Code

using System;
using System.Collections;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Models
{
    public interface IRegistrationRepo
    {
        IEnumerable<Course> GetCourses();
        Course GetCourse(int courseId);
        Course AddCourse(Course course);
        bool DeleteCourse(int courseId);
    }
}

RegistrationRepo implementation

using System;
using System.Collections.Generic;
using System.Linq; 


namespace Registration_MVC6WebApi.Models
{
    public class RegistrationRepo : IRegistrationRepo
    {
        private readonly RegistrationDbContext _db;

        public RegistrationRepo(RegistrationDbContext db)
        {
            _db = db;
        }
        public Course AddCourse(Course course)
        {
            _db.Courses.Add(course);

            if (_db.SaveChanges() > 0)
            {
                return course;
            }
            return null;

        }

        public bool DeleteCourse(int courseId)
        {
            var course = _db.Courses.FirstOrDefault(c => c.Id == courseId);
            if (course != null)
            {
                _db.Courses.Remove(course);
                return _db.SaveChanges() > 0;
            }
            return false;
        }

        public Course GetCourse(int courseId)
        {
            return _db.Courses.FirstOrDefault(c => c.Id == courseId);
        }

        public IEnumerable<Course> GetCourses()
        {
            return _db.Courses.AsEnumerable();
        }
    }
}

The implementation here is fairly simple. What is worth noting is how we pass “RegistrationDbContext” as a parameter to the “RegistrationRepo” constructor: we are using constructor injection, which would not work had we not configured it earlier in our “Startup” class.

Step 6 – Installing KVM (K Version Manager)

Now that we’ve added our database context and domain data models, we can use migrations to create the database. With previous versions of ASP.NET we used the NuGet package manager console for these types of tasks, but with ASP.NET 5 we use the command prompt and the various K* commands.

What is KVM (K Version Manager)? KVM is a PowerShell script used to get the runtime and manage multiple versions of it on the machine at the same time. You can read more about it here.

Now to install KVM for the first time you have to do the following steps:

1. Open a command prompt with Run as administrator.
2. Run the following command:

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.ps1'))"

3. The script installs KVM for the current user.
4. Exit the command prompt window and start another as an administrator (you need to start a new command prompt to get the updated path environment).
5. Upgrade KVM with the following command:

KVM upgrade

Step 7 – Initializing and Applying Database Migrations

Now our command prompt is ready to understand K commands and Entity Framework commands. The first thing to do is change the directory to the project directory, the one that contains the “project.json” file, as in the image below:

Then, in the command prompt, we need to run the following two commands:

k ef migration add initial
k ef migration apply

Basically, the first command will add a migration file with the name format (<date>_<migration name>), so we’ll end up with a file named “201411172303154_initial.cs” under a folder named “Migrations” in our project. This auto-generated file contains the code needed to add our Courses table to the database; the newly generated files will show up in the project as in the image below:

The second command will apply those migrations and create the database for us, based on the connection string we specified earlier in the “config.json” file.

Note: the “ef” prefix comes from the settings we specified earlier in the “commands” section of “project.json”.

Step 8 – Adding GET Methods for the Courses Controller

The controller is the class responsible for handling HTTP requests. With ASP.NET 5, our Web API controller inherits from the “Controller” class, no longer from “ApiController”. Add a new folder named “Controllers”, then add a new controller named “CoursesController” and paste in the code below:

using Microsoft.AspNet.Mvc;
using Registration_MVC6WebApi.Models;
using System;
using System.Collections.Generic;

namespace Registration_MVC6WebApi.Controllers
{
    [Route("api/[controller]")]
    public class CoursesController : Controller
    {
        private IRegistrationRepo _registrationRepo;

        public CoursesController(IRegistrationRepo registrationRepo)
        {
            _registrationRepo = registrationRepo;
        }

        [HttpGet]
        public IEnumerable<Course> GetAllCourses()
        {
            return _registrationRepo.GetCourses();
        }

        [HttpGet("{courseId:int}", Name = "GetCourseById")]
        public IActionResult GetCourseById(int courseId)
        {
            var course = _registrationRepo.GetCourse(courseId);
            if (course == null)
            {
                return HttpNotFound();
            }

            return new ObjectResult(course);
        }

    }
}

What we’ve implemented in the controller class is the following:

1. The controller is decorated with the Route attribute [Route("api/[controller]")], so any HTTP requests that match the template are routed to the controller. The "[controller]" part of the template URL is substituted with the controller class name, minus the "Controller" suffix. In our case, for the "CoursesController" class, the route template is "api/courses".

2. We’ve defined two HTTP GET methods, the first one “GetAllCourses” is attributed with “[HttpGet]” and it returns a .NET object which is serialized in the body of the response using the default JSON format.

3. The second HTTP GET method, "GetCourseById", is also attributed with "[HttpGet]", but we've specified a constraint on the parameter "courseId": it must be of integer data type. We've also given this route the name "GetCourseById", which we'll use in the next step. Lastly, this method returns an object of type IActionResult, which gives us the flexibility to return different action results based on our logic; in our case we return HttpNotFound if the course does not exist, or a serialized JSON object of the course when it is found.

4. Lastly, notice how we are passing "IRegistrationRepo" as a constructor parameter to our CoursesController; by doing this we are again implementing Constructor Injection.

Last Step - Adding POST and DELETE methods for Courses Controller

Now we want to implement two more HTTP methods which allow us to add a new Course or delete an existing one, so open the file "CoursesController" and paste the code below; a sketch of the DELETE method follows the POST method.

[HttpPost]
public IActionResult AddCourse([FromBody] Course course)
{
    if (!ModelState.IsValid)
    {
        Context.Response.StatusCode = 400;
        return new ObjectResult(new CourseStatusModel { Id = 1, Description = "Course model is invalid" });
    }

    var addedCourse = _registrationRepo.AddCourse(course);

    if (addedCourse != null)
    {
        // Build the Location header URL from the named route "GetCourseById"
        string url = Url.RouteUrl("GetCourseById", new { courseId = course.Id }, Request.Scheme, Request.Host.ToUriComponent());

        Context.Response.StatusCode = 201;
        Context.Response.Headers["Location"] = url;
        return new ObjectResult(addedCourse);
    }

    Context.Response.StatusCode = 400;
    return new ObjectResult(new CourseStatusModel { Id = 2, Description = "Failed to save course" });
}
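The POST method above finishes the listing; for the DELETE method promised in this step's title, here is a minimal sketch. The repository method name "DeleteCourse" and the use of "HttpStatusCodeResult" are assumptions based on the beta-era MVC 6 API:

[HttpDelete("{courseId:int}")]
public IActionResult DeleteCourse(int courseId)
{
    // Assumed repository method returning true when the course existed and was removed
    if (_registrationRepo.DeleteCourse(courseId))
    {
        return new HttpStatusCodeResult(204); // 204 No Content
    }

    return HttpNotFound();
}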



SQL Hosting with ASPHostPortal :: Using SQLBulkCopy and C# to Upload File

clock August 12, 2015 08:17 by author Jervis

In this article I am going to write about SQLBulkCopy and its major properties and methods. This article will give you the code for a high-performance transfer of rows from an XML file to SQL Server with SQLBulkCopy and C#.

SQLBulkCopy was introduced as part of .NET Framework 2.0. It is a simple and easy tool for transferring complicated or simple data from one data source to another. You can read data from any data source, as long as that data can be loaded into a DataTable or read by an IDataReader, and transfer it with high performance to SQL Server using SQLBulkCopy.

In real-world applications, millions of records get transferred from one data store to another every day. There are multiple ways to transfer the data, such as SQL Server's command-line bcp utility, generated INSERT statements, SSIS packages, and SQLBulkCopy. SQLBulkCopy gives you a significant performance gain over the other tools.

SQLBulkCopy Constructor

A SQLBulkCopy instance can be initialized in four different ways:

1. Accepts an already open SqlConnection for the destination.
2. Accepts a connection string for a SqlConnection. This constructor actually opens and initializes a new instance of SqlConnection for the destination.
3. Accepts a connection string for a SqlConnection and an enum value of SqlBulkCopyOptions. This constructor actually opens and initializes a new instance of SqlConnection for the destination.
4. Accepts an already open SqlConnection and an enum value of SqlBulkCopyOptions.

SqlBulkCopy bulkCopy =
    new SqlBulkCopy(destinationConnection.ConnectionString,
        SqlBulkCopyOptions.TableLock);

BatchSize

SQLBulkCopy's BatchSize is an integer property with a default value of 0. It decides how many rows are sent to the server in each batch. If you do not set a value for this property, or set it to 0, all the records are sent in a single batch.

Following example sets BatchSize property as 50.

bulkCopy.BatchSize = 50;

ColumnMappings

SQLBulkCopy's ColumnMappings is a collection of mappings from the source table's columns to the destination table's columns. You do not need to map the columns if the column names are the same. However, it is very important to map the columns if the column names differ; if SQLBulkCopy does not find a matching column, it throws a System.InvalidOperationException.

You can map the columns in different ways; giving both column names is the easiest and most readable method.

The code below maps the column OrderID of the source table to the column NewOrderID of the destination table.

bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");  

Data Type issue while mapping the column

SqlBulkCopy is particular about matching column DataTypes. Both columns have to be of the same DataType. If you have nullable columns, you explicitly have to convert such columns into the desired DataType.

The code below converts NULL to an empty varchar(2) value, so the column can be mapped to any varchar(2) column of the destination table.

SELECT  CAST(ISNULL(ShipRegion,'') as varchar(2))
            as ShipRegion FROM Orders

Quick note: if you have computed columns like SUM, AVG, etc., make sure they return the expected DataType. If your destination table expects a decimal(15,7) column, you will have to explicitly convert the source column to decimal(15,7), because SUM will by default return decimal(38,7).

DestinationTableName

It sets the name of the destination table. The WriteToServer method will copy the source rows to this particular table.

The code below sets the destination table to "TopOrders".

bulkCopy.DestinationTableName = "TopOrders";   

NotifyAfter and SqlRowsCopied

NotifyAfter is an integer property with a default value of 0, and SqlRowsCopied is an event. The value of NotifyAfter indicates after how many processed rows the SqlRowsCopied event is raised.

In the code below, the SqlRowsCopied event fires after every 100 processed rows.

bulkCopy.SqlRowsCopied +=
    new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);
bulkCopy.NotifyAfter = 100;

private static void
    OnSqlRowsTransfer(object sender, SqlRowsCopiedEventArgs e)
{
        Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}

WriteToServer

WriteToServer is the method that actually copies your source data to the destination table. It accepts an array of DataRows, a DataTable, or an IDataReader. With a DataTable you can also specify the state of the rows that need to be processed.

The following code copies to the destination table only the rows of the sourceData DataTable whose RowState is Added.

bulkCopy.WriteToServer(sourceData, DataRowState.Added);
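Putting the pieces together, here is a minimal end-to-end sketch that loads an XML file into a DataTable and bulk-copies it to SQL Server. The file name, connection string, table name, and column names are assumptions for illustration only:

using System;
using System.Data;
using System.Data.SqlClient;

class BulkUploadSample
{
    static void Main()
    {
        // Load the XML file into a DataSet; ReadXml infers the table structure
        var dataSet = new DataSet();
        dataSet.ReadXml("Orders.xml"); // hypothetical input file
        DataTable sourceData = dataSet.Tables[0];

        // Hypothetical destination connection string
        string connectionString = "Server=.;Database=Northwind;Integrated Security=true;";

        using (var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulkCopy.DestinationTableName = "TopOrders";
            bulkCopy.BatchSize = 50;

            // Once any mapping is added, every column you want copied must be mapped
            bulkCopy.ColumnMappings.Add("OrderID", "NewOrderID");

            // Report progress after every 100 rows
            bulkCopy.NotifyAfter = 100;
            bulkCopy.SqlRowsCopied += (sender, e) =>
                Console.WriteLine("Copied {0} so far...", e.RowsCopied);

            bulkCopy.WriteToServer(sourceData);
        }
    }
}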



ASP.NET MVC 4 Hosting - ASPHostPortal :: How to Fix Could not load file or assembly DotNetOpenAuth.Core, Version=4.0.0.0

clock September 8, 2014 09:33 by author Jervis

We have seen many of our clients experience this problem, and we have also found this issue on many forums, so we decided to write up the solution in this tutorial and hope it will fix your problem.

Error Message

Could not load file or assembly 'DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

The Problem

The application was actually upgraded earlier from ASP.NET MVC 3, i.e. it was not autogenerated by Visual Studio 2012. The pre-binding info in the exception was:

=== Pre-bind state information ===
LOG: User = NT AUTHORITY\NETWORK SERVICE
LOG: DisplayName = DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246
 (Fully-specified)
LOG: Appbase = file:///
LOG: Initial PrivatePath = \bin
Calling assembly : Microsoft.Web.WebPages.OAuth, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: \web.config
LOG: Using host configuration file: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet.config
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config\machine.config.
LOG: Post-policy reference: DotNetOpenAuth.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core.DLL.
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core/DotNetOpenAuth.Core.DLL.
LOG: Attempting download of new URL file:////DotNetOpenAuth.Core.DLL.
WRN: Comparing the assembly name resulted in the mismatch: Minor Version
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.

The key pieces of information here are the DisplayName entry (log line 3) and the Calling assembly entry (log line 7): Microsoft.Web.WebPages.OAuth needs DotNetOpenAuth.Core 4.0.0.0, but a different version of DotNetOpenAuth.Core is deployed, so the bind fails with a version mismatch. Check which DotNetOpenAuth.Core version your project actually references.

Solution

1. Add these lines under the <runtime>/<assemblyBinding> section of the root Web.config:

<dependentAssembly>
                <assemblyIdentity name="DotNetOpenAuth.AspNet" publicKeyToken="2780ccd10d57b246" culture="neutral" />
                <bindingRedirect oldVersion="0.0.0.0-4.3.0.0" newVersion="4.3.0.0" />
</dependentAssembly>
<dependentAssembly>
                <assemblyIdentity name="DotNetOpenAuth.Core" publicKeyToken="2780ccd10d57b246" culture="neutral" />
                <bindingRedirect oldVersion="0.0.0.0-4.3.0.0" newVersion="4.3.0.0" />
</dependentAssembly>
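For context, these dependentAssembly elements live inside the runtime section of the root Web.config; a minimal sketch of the surrounding structure:

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <!-- dependentAssembly elements go here -->
  </assemblyBinding>
</runtime>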

2. The above solution works for other packages that ASP.NET MVC 4 depends on. For example, if you upgrade WebGrease from 1.0.0.0 to 1.3.0.0, you have to add this to the <runtime>/<assemblyBinding> section:

<dependentAssembly>
                <assemblyIdentity name="WebGrease" publicKeyToken="31bf3856ad364e35" culture="neutral" />
                <bindingRedirect oldVersion="0.0.0.0-1.3.0.0" newVersion="1.3.0.0" />
</dependentAssembly>

 

 



About ASPHostPortal.com

We’re a company that works differently to most. Value is what we output and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle. "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as being an incredible amplifier to the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is a continual enlightenment of new ideas, experiences, and passions.


Corporate Address (Location)

ASPHostPortal
170 W 56th Street, Suite 121
New York, NY 10019
United States
