Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

Crystal Report Hosting :: How to Fix Unable to Cast COM Object of Type ‘ReportSourceClass’ to Interface Type ‘ISCRReportSource’

clock March 20, 2019 08:31 by author Jervis

When creating or printing a report with Crystal Reports, whether printing to a printer or exporting to Acrobat PDF format, the following error may occur in the Crystal Report Windows Forms Viewer and no report is generated.

System.InvalidCastException: Unable to cast COM object of type 'CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass' to interface type 'CrystalDecisions.ReportAppServer.Controllers.ISCRReportSource'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{98CDE168-C1BF-4179-BE4C-F2CFA7CB8398}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
   at System.StubHelpers.StubHelpers.GetCOMIPFromRCW(Object objSrc, IntPtr pCPCMD, IntPtr& ppTarget, Boolean& pfNeedsRelease)
   at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.Refresh()
   at CrystalDecisions.ReportSource.EromReportSourceBase.Refresh(RequestContext reqContext)
   at CrystalDecisions.CrystalReports.Engine.FormatEngine.Refresh(RequestContext reqContext)
   at CrystalDecisions.CrystalReports.Engine.ReportDocument.Refresh()
   at CrystalDecisions.CrystalReports.Engine.Table.SetDataSource(Object val, Type type)
   at CrystalDecisions.CrystalReports.Engine.ReportDocument.SetDataSourceInternal(Object val, Type type)
   at CrystalDecisions.CrystalReports.Engine.ReportDocument.SetDataSource(DataTable dataTable)
   at Portal_Inkaso.frIndex.OrderTT()
   at Portal_Inkaso.frIndex.Order1()
   at Portal_Inkaso.frIndex.llbOrder_LinkClicked(Object sender, LinkLabelLinkClickedEventArgs e)
   at System.Windows.Forms.LinkLabel.OnLinkClicked(LinkLabelLinkClickedEventArgs e)
   at System.Windows.Forms.LinkLabel.OnMouseUp(MouseEventArgs e)
   at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.Label.WndProc(Message& m)
   at System.Windows.Forms.LinkLabel.WndProc(Message& msg)
   at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
   at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

This error is normally caused by an incompatibility between different versions of Crystal Reports, and it often appears after platform updates.

Thus, to resolve the “Unable to cast COM object of type ‘CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass’ to interface type ‘CrystalDecisions.ReportAppServer.Controllers.ISCRReportSource'” issue, first remove all old Crystal Reports assemblies from the References list in every project, then add the new Crystal Reports assemblies and rebuild the application. If you are unable to rebuild the app, add the following <dependentAssembly> binding-redirect workaround, as suggested by SAP, to app.config/web.config:

  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.CrystalReports.Engine" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportSource" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.Shared" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.Web" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.Windows.Forms" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.ClientDoc" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.CommonControls" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.CommonObjectModel" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.Controllers" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.CubeDefModel" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.DataDefModel" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.DataSetConversion" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.ObjectFactory" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.Prompting" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.ReportDefModel" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
      <dependentAssembly>
          <assemblyIdentity name="CrystalDecisions.ReportAppServer.XmlSerialize" publicKeyToken="692fbea5521e1304" culture="neutral"/>
          <bindingRedirect oldVersion="13.0.2000.0" newVersion="13.0.3500.0"/>
      </dependentAssembly>
  </assemblyBinding>

Replace the oldVersion and newVersion values to match the Crystal Reports versions in your deployment.

If the Crystal Reports error only happens on some computers, but not on others, make sure to update and deploy the same version of Crystal Reports to all workstations.

Best Crystal Report Hosting?

If you are looking for a Crystal Reports hosting solution, please kindly visit our site at 

NopCommerce 4.0 Hosting :: How to Fix Error Microsoft.AspNetCore.Hosting version 2.1.1-rtm-30846

clock March 13, 2019 13:10 by author Jervis

We received feedback from one of our customers who encountered an error while installing nopCommerce. The error message is as follows:

An error occurred while starting the application. .NET Core 4.6.26725.06 X64 v4.0.0.0 | Microsoft.AspNetCore.Hosting version 2.1.1-rtm-30846 |

When this error page is shown, you need to find out what the underlying error is before you can fix it. You have two options:

Open the administration area of your store, then go to System -> Log.
The Log contains all the errors and warnings in your store. Most of the time you should be looking for the most recent error. The error details contain all the information you need to investigate and fix the problem.

If you can't access the administration you have to turn on your site log, so that your errors are logged in a .txt file. To do so, modify your Web.config file by setting stdoutLogEnabled to true:

  <aspNetCore requestTimeout="00:07:00" processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" forwardWindowsAuthToken="false"
stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
change to:

  <aspNetCore requestTimeout="00:07:00" processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" forwardWindowsAuthToken="false"
stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" />

Once the error occurs again, a .txt log file is generated in the ~/Logs folder (~/Presentation/Nop.Web/Logs for the source version of nopCommerce).

Hope it helps! If you are looking for reliable nopCommerce hosting, please give us a look.


ASP.NET Core Hosting :: Which is Better? ASP.NET Core Razor Pages or MVC?

clock March 4, 2019 10:50 by author Jervis

With the release of the ASP.NET Core 2 framework, Microsoft and its community have provided us with a brand new alternative to the MVC (Model-View-Controller) approach: Razor Pages. While it takes a slightly different approach, it is still similar to MVC in some ways.

In this article, we are going to cover the following important points about ASP.NET Razor Pages.

  • Razor Pages — what is it exactly?
  • Drawbacks of Using ASP.NET MVC
  • Advantages of Using Razor Pages
  • A Quick Comparison of How Requests Are Handled in Both

Razor Pages — What is It Exactly?

A Razor Page is very similar to ASP.NET MVC's view component, with essentially the same syntax and functionality.

The key difference between Razor pages and MVC is that the model and controller code is also included within the Razor Page itself.

In simple terms, it is much like an MVVM (Model-View-ViewModel) framework. It provides two-way data binding and a simpler development experience with isolated concerns.

While MVC works well for web apps with a large number of dynamic server views, single-page apps, REST APIs, and AJAX calls, Razor Pages are perfect for simple pages that are read-only or perform basic data input.

Now, ASP.NET MVC has been extremely popular for web application development, and it definitely has its benefits. ASP.NET WebForms, in fact, was closer to an MVVM-style solution.

The new ASP.NET Core Razor Pages can be seen as the next evolution of ASP.NET WebForms.

Drawbacks of ASP.NET MVC

As most of you probably know, MVC stands for Model-View-Controller. It is an architectural pattern used in software development for implementing UI (user interfaces).

While MVC is one of the most popular frameworks and is used by millions of web developers worldwide, it still has its drawbacks. Let's look at the two most important of them.

#1 — Complexity

In ASP.NET MVC, there are piles of concepts such as TempData, RouteCollection, ViewData, Linq to SQL, Controller Action, Lambda Expression, Custom Route, and HTML Helpers, all of which tie together the Model, View, and Controller.

Now, you cannot build a web application using ASP.NET MVC until you learn all these basic concepts. Plus, even if you’ve learned them, you will still face complexity issues at times, especially when you’re building large-scale applications.

#2 — Cost of Frequent Updates

In ASP.NET MVC, web developers cannot completely ignore the view or the model even though they are separated. The reason is that when the model changes frequently, the views of your application can be flooded with update requests.

Views are basically graphical displays, which take some time to render depending on the complexity of your application. And if your application is complex and the model changes a lot, the view may fall behind on update requests. Developers then need to spend extra time fixing this, resulting in higher costs.

Advantages of Using Razor Pages

We’ve been providing ASP.NET MVC development services for about 10 years now, and we are a certified Microsoft Gold Partner. Based on our knowledge, experience, and expertise, there are two main benefits of using ASP.NET Core Razor Pages instead of MVC.

#1 — Razor Pages is Better Organized

If you’ve ever used MVC for any kind of web development, then you probably know how much time it takes to code an entire app. Creating dynamic routes, naming things properly, and a hundred other tasks consume a lot of time.

Razor Pages, on the other hand, is more organized compared to MVC.

In Razor Pages, the files are simply better organized: you have a Razor view and a code-behind file, much like the old ASP.NET WebForms.

#2 — Single Responsibility

Again, if you have ever used an MVC framework before, you have probably seen some huge controller classes filled with many different actions. These classes tend to grow bigger and bigger as new things are added.

But in Razor Pages, each app page is self-contained, with its view and code organized together, which as a result is less complex than MVC.

Overall, ASP.NET Core is a Modular Web Framework.

In MVC, adding new capabilities to the framework meant Microsoft had to release a whole new version.

For example, Microsoft released routing in MVC 4; later, attribute routing required yet another framework release, MVC 5.

In ASP.NET Core, on the other hand, everything is managed through NuGet packages, which makes it easier to upgrade the existing framework without a new .NET Framework release every time something is added.

Additionally, in .NET Core, the community can release updates as new NuGet package versions, and you receive the latest changes simply by updating your NuGet packages.

A Quick Comparison of How Requests Are Handled in Both

We explained above that building a web application with ASP.NET Core Razor Pages is less complex than with MVC. Here, we will demonstrate that in action.

Let’s start with MVC

Here’s a quick overview of how MVC handles the requests.


As you can see, routing is the key to how MVC decides to handle requests. The default routing configuration is a combination of controller and action names.

So if you request /staff/index, it will route you to the action named Index on the StaffController class.

But routing can be customized or configured to route any request to any controller with a block of code.

Now Compare the Same with Razor Pages

Here’s a quick overview of how Razor Pages handle the requests.

The difference between the two is that in Razor Pages, when you make a request, the default routing configuration looks for a matching Razor Page in the Pages folder.

Suppose you make a request for /contact/; ASP.NET Core will look for a page whose name matches the request and route you directly to it.

That means a request to /contact/ will route you to Contact.cshtml.

For a .cshtml file to be treated as a Razor Page, it must be placed in the Pages folder and contain the @page directive in its markup.

The Razor Page then acts as a controller action. Compared to MVC, there is no custom route to configure and no extra coding involved.
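To make the comparison concrete, the two routing conventions above can be sketched in a few lines of TypeScript. This is a simplified, hypothetical model for illustration only, not actual framework code:

```typescript
// Simplified model of MVC's default route convention:
// "/staff/index" -> controller "StaffController", action "Index".
function resolveMvcRoute(path: string): { controller: string; action: string } {
  const cap = (s: string) => s.charAt(0).toUpperCase() + s.slice(1);
  const [controller = "home", action = "index"] = path.split("/").filter(Boolean);
  return { controller: cap(controller) + "Controller", action: cap(action) };
}

// Simplified model of the Razor Pages convention:
// "/contact/" -> the file "Pages/Contact.cshtml".
function resolveRazorPage(path: string): string {
  const segment = path.split("/").filter(Boolean)[0] ?? "index";
  return "Pages/" + segment.charAt(0).toUpperCase() + segment.slice(1) + ".cshtml";
}
```

The point of the sketch: MVC resolves a request to a controller/action pair, while Razor Pages resolves it directly to a page file on disk.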

In Conclusion

Razor Pages seems like a promising start for modern web app development with less complexity. As the comparison has shown, it keeps everything related to a particular request in one place, whereas MVC spreads the pieces around your app like a giant puzzle that you then have to put back together with extra coding effort.

Angular Hosting :: How to Secure Your Angular Application

clock February 26, 2019 06:23 by author Jervis

Software security is a major concern for every application. There are common vulnerabilities reported for web applications that we need to guard against in every application. In this article, I will discuss the vulnerabilities possible in an Angular application and how to prevent them by following best practices.

Protect the application from Cross-Site Scripting (XSS)

XSS allows attackers to inject client-side scripts or malicious code into web pages that can be viewed by other users. This kind of attack mostly happens via query strings, input fields, or request headers. To prevent XSS attacks, we must stop users from injecting malicious code into the DOM. For example, an attacker might enter a script tag into an input field; it should be rendered as plain, read-only text rather than executed.

By default, Angular treats all values as untrusted when they are inserted into the DOM via attributes, interpolation, properties, etc. It escapes and sanitizes values before rendering them. The XSS-related security in Angular is defined in "BrowserModule". Angular's DomSanitizer helps to clean untrusted parts of a value. The DomSanitizer class looks like the following:

export declare abstract class DomSanitizer implements Sanitizer {
  abstract sanitize(context: SecurityContext, value: SafeValue | string | null): string | null;
  abstract bypassSecurityTrustHtml(value: string): SafeHtml;
  abstract bypassSecurityTrustStyle(value: string): SafeStyle;
  abstract bypassSecurityTrustScript(value: string): SafeScript;
  abstract bypassSecurityTrustUrl(value: string): SafeUrl;
  abstract bypassSecurityTrustResourceUrl(value: string): SafeResourceUrl;
}

Here there are two types of method patterns: sanitize and bypassSecurityTrustX (bypassSecurityTrustHtml, bypassSecurityTrustStyle, etc.). The sanitize method takes an untrusted value for a given context and returns a trusted value. The bypassSecurityTrustX methods take an untrusted value and, according to the intended usage, return a value marked as trusted. In specific situations we might need to disable sanitization; by calling one of the bypassSecurityTrustX methods, we can bypass security and bind the value.


import { Component } from '@angular/core';
import { DomSanitizer, SafeHtml } from '@angular/platform-browser';

@Component({
  selector: 'test-component',
  template: `<div [innerHtml]="myHtml"></div>`
})
export class App {
  public myHtml: SafeHtml;
  constructor(private sanitizer: DomSanitizer) {
    this.myHtml = sanitizer.bypassSecurityTrustHtml('<h1>Example: Dom Sanitizer: Trusted HTML</h1>');
  }
}

Be careful when turning off or bypassing any security setting: malicious code might get through, and we might inject a security vulnerability into our own application. Sanitization inspects untrusted values and converts them into values that are safe to insert into the DOM tree. It does not always change the value; Angular allows some untrusted values for HTML, styles, and URLs. Angular defines the following security contexts:

  • It uses the HTML context when interpreting a value as HTML
  • It uses the Style context when CSS is bound to a style property
  • It uses the URL context when a URL is bound (for example, <a href>)

Angular generates a warning and prints it to the console when it changes a value during sanitization.



Use Route guards when required

Route guards are interfaces that tell the router whether or not to allow navigation to a requested URL. The decision is based on the interface's return value: if it returns true, the router navigates to the new URL; otherwise it does not. There are mainly five types of guards, and they are called in a particular sequence. We can modify routing behavior depending on which guard is used. The provided route guards are:

  • CanActivate: checks access to the route
  • CanActivateChild: checks access to a child route
  • CanDeactivate: asks for permission to discard unsaved changes
  • CanLoad: checks access before a feature module is loaded
  • Resolve: pre-fetches the route data

In the following example code, I have implemented a CanActivate route guard that allows the route if token data is available in local storage, and otherwise redirects to the login page. In route guards, we can put any kind of check, such as whether the user's role has the right to access the page.

Route Guard Example

import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';

@Injectable()
export class AuthorizationCheck implements CanActivate {

  constructor(private router: Router) { }

  canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot) {
    // If token data exists, the user may access the application
    if (localStorage.getItem('TokenInfo')) {
      return true;
    }

    // Otherwise redirect to the login page with the return url
    this.router.navigate(['/login'], { queryParams: { returnUrl: state.url } });
    return false;
  }
}

We can apply this route guard to routes in the routing module. In the following example code, the routes are defined in the app.module file.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { RouterModule } from '@angular/router';
import { AuthorizationCheck } from './Services/authorizationCheck';
// HomeComponent and CounterComponent imports omitted for brevity

@NgModule({
  declarations: [ HomeComponent, CounterComponent ],
  imports: [
    BrowserModule,
    RouterModule.forRoot([
      { path: '', component: HomeComponent, pathMatch: 'full', canActivate: [AuthorizationCheck] },
      { path: 'counter', component: CounterComponent, canActivate: [AuthorizationCheck] }
    ])
  ],
  providers: [AuthorizationCheck]
})
export class AppModule { }

In this way, we can protect our routes so they are not easily bypassed. However, a user who knows the system internals may still be able to break a route guard, so do not rely on guards alone.

Remove local storage and session storage data after logging out of the application

After a successful login, we generally store user data such as the user name and authentication token in either local storage or window session storage. If this information remains available after the user logs out, a hacker or attacker may use it to gain access to the application.

Local storage is shared between multiple tabs as well as across browser sessions. Session storage is only accessible to a particular browser tab and is destroyed when that tab is closed. I recommend using session storage instead of local storage for this kind of information, although it depends on the application's requirements. Either way, it is best practice to remove such data from local storage and session storage after logout.
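As a sketch of this cleanup step, a logout handler could clear both stores. The key names below ('TokenInfo', 'UserName') are examples, and the AuthStorage interface simply mirrors the relevant part of the browser's Storage API, so in a real app you would pass in window.localStorage and window.sessionStorage:

```typescript
// Minimal sketch of logout cleanup; AuthStorage matches the subset of the
// browser Storage API we need, which keeps the function easy to test.
interface AuthStorage {
  removeItem(key: string): void;
  clear(): void;
}

function clearAuthData(local: AuthStorage, session: AuthStorage): void {
  // Remove the keys written at login (key names are illustrative)
  local.removeItem('TokenInfo');
  local.removeItem('UserName');
  // Drop everything kept for the current browser tab
  session.clear();
}
```

In an Angular service you would call this as clearAuthData(localStorage, sessionStorage) from your logout method, before navigating back to the login page.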

Implement CSP (Content Security Policies)

CSP is an added layer of security that helps us detect and mitigate certain types of attacks, including data injection and XSS. To enable Content Security Policy, our web server (API) must return an appropriate Content-Security-Policy HTTP header. We can implement CSP either with an HTML meta tag or by setting the "Content-Security-Policy" response header.

<meta http-equiv="Content-Security-Policy" content="default-src 'self'; child-src 'none'; object-src 'none'">


Content-Security-Policy: script-src 'self'
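To see how such a header value is assembled, here is a small TypeScript helper that builds the policy string from a map of directives to allowed sources. The helper itself is hypothetical (not part of Angular or any framework); it just illustrates the header's directive-list format:

```typescript
// Hypothetical helper: build a Content-Security-Policy header value
// from a map of directive -> allowed source list.
function buildCspHeader(policy: Record<string, string[]>): string {
  return Object.entries(policy)
    .map(([directive, sources]) => [directive, ...sources].join(' '))
    .join('; ');
}

// On a Node-based API you might then attach it to every response, e.g.:
// res.setHeader('Content-Security-Policy',
//               buildCspHeader({ 'script-src': ["'self'"] }));
```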

Do not use DOM’s APIs directly

Angular recommends using Angular templates rather than DOM APIs such as document, ElementRef, etc. Angular has no control over these DOM APIs, so it cannot provide protection against security vulnerabilities there, and an attacker could inject malicious code into the DOM tree.

Prevent CSRF (Cross-site request forgery)

CSRF is also known as session riding. The attacker forges requests that appear to come from a trusted source and executes actions on the user's behalf. This kind of attack can damage both client relationships and the business. Angular's HttpClient includes a common mechanism for CSRF protection: when the application makes an HTTP request, an interceptor reads the token from a cookie and sets an HTTP header. The interceptor adds this header to mutating requests such as POST made to relative URLs, but it does not add it to HEAD/GET requests or to requests with absolute URLs.

So the server needs to set a token in a JavaScript-readable session cookie on the first GET request or page load. On subsequent requests, the server verifies that the token in the request header matches the cookie. In this way, the server can be sure the code is running on the same domain. The token must be unique for each user and verified by the server. CSRF protection also needs to be applied on the server (our back-end service). In an Angular application, we can use different names for the XSRF token cookie or header by overriding the defaults with the HttpClientXsrfModule.withOptions method.

imports: [
  HttpClientModule,
  HttpClientXsrfModule.withOptions({
    cookieName: 'my-Cookie',
    headerName: 'my-Header'
  })
]

Prevent Cross-Site Script Inclusion (XSSI)

Cross-site script inclusion (XSSI), also known as JSON vulnerability, allows an attacker to read data from a JSON API. The attacker overrides the native JavaScript object constructors and includes the API URL using a script tag. The attack succeeds if the returned JSON is executable as JavaScript. We can prevent it by prefixing all JSON responses with the well-known string ")]}',\n", which makes the JSON non-executable. Angular's HttpClient recognizes this convention and strips the prefix, so the response cannot run as a script.
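The mechanism can be sketched in a few lines of TypeScript. This mirrors the idea of what HttpClient does internally, but the function itself is illustrative rather than framework code:

```typescript
// The conventional non-executable prefix for JSON responses.
const XSSI_PREFIX = ")]}',\n";

// A <script src="..."> inclusion chokes on the prefix, while a legitimate
// HTTP client strips it (if present) before handing the body to JSON.parse.
function parseXssiProtectedJson(body: string): unknown {
  const cleaned = body.startsWith(XSSI_PREFIX)
    ? body.slice(XSSI_PREFIX.length)
    : body;
  return JSON.parse(cleaned);
}
```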

Up-to-date Angular Libraries

Continuous updates with bug fixes, security patches, and feature enhancements are released at regular intervals, so it is recommended to keep the Angular libraries up to date. A security issue may already be fixed in a newer release, preventing vulnerabilities that an attacker could otherwise exploit.

Avoid Modifying the Angular Copy

It is recommended to avoid modifying your copy of Angular, because doing so creates a hard link between your application and the Angular libraries. As described in the previous point, continuous updates with bug fixes, security patches, and feature enhancements are released for the Angular libraries, and a modified copy makes it very difficult to upgrade to newer Angular versions.

Use Offline Template Compiler

It is recommended to use the offline template compiler, especially in production deployments, to prevent a class of vulnerabilities called template injection. Angular trusts template code, so someone could add vulnerabilities to a dynamically created template, resulting in a malicious attack on the DOM tree.

Validate user submitted data on server-side code

It is good practice to validate all submitted data in server-side code. This helps prevent data-related vulnerabilities. Sometimes an attacker uses XSS techniques to try to inject malicious data into our application. If we validate the data on the server side as well, we can protect the application from this kind of attack.

Do not use components with Known Vulnerabilities

Many third-party libraries and components are available, and nowadays it is practically impossible to develop an application without them. These libraries may have known vulnerabilities, such as CSRF, XSS, or buffer overflows, which an attacker can use to inject malicious code or data into our application. To reduce this risk:


  • Download libraries from a trusted source
  • Always use an updated version of a library (the latest version may fix critical security defects)
  • Monitor a library's vulnerabilities through sources such as the NVD CVE database



Crystal Report Hosting :: How to Display Crystal Report with Images Using Typed Dataset

clock February 18, 2019 08:01 by author Jervis

This document focuses on how to display Crystal Reports with images using a typed dataset. Instead of querying the database while displaying reports, we can use already-filled data tables. This design is based on Crystal Reports for Visual Studio .NET.


This article explains the simple steps to create Crystal Reports using a typed dataset. The examples in this document use a “Company” table, which stores general data as well as logo images.

Create typed dataset

This is the company master table schema, which stores company information and the company logo images.

Database Table schema:


Follow these steps to add typed data set in your report project.

Go to Project, Add, New Item, Data, then Data Set

Once the dataset has been added to your project, drag and drop the database table from Server Explorer.

In this example I have used Company table.

Data Table:


Create report and browse data table

  • Instead of using a table directly from the database, we can use a data table from the typed dataset
  • The dataset is available under Project Data, then ADO.NET DataSets
  • Select the data table from the available data sources and add it to the selected tables section
  • Once the data table is added, click the “OK” button to map the data table in the Crystal Report

Fill data table in typed dataset

  • Fetch the data from the database and fill the data table in the dataset.
  • This method returns the dataset used in the display report method.

public ReportsDS Report_GetReportData()
{
    SqlConnection sqlConn = new SqlConnection();
    sqlConn.ConnectionString = connectionString;
    ReportsDS ReportDS = new ReportsDS();
    string CompanyId = CachingHelper.ReadFromCache(Constants.CompanyId);
    try
    {
        SqlCommand cmd = new SqlCommand();
        cmd.CommandType = CommandType.Text;
        // Use a parameter instead of string concatenation to avoid SQL injection
        cmd.CommandText = "SELECT * FROM Company WHERE Id = @CompanyId";
        cmd.Parameters.AddWithValue("@CompanyId", CompanyId);
        cmd.Connection = sqlConn;
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        da.Fill(ReportDS, ReportDS.Company.TableName);
    }
    catch (Exception ex)
    {
        Logging.CustomizedException(ref ex, false);
    }
    finally
    {
        if (sqlConn.State != ConnectionState.Closed) sqlConn.Close();
    }
    return ReportDS;
}

Display Reports

  • Create an instance of ReportDocument and set the data table as its DataSource
  • Set the crystal report viewer's ReportSource to the ReportDocument
  • Call the DisplayReport() method in an event handler, such as a button click

private void DisplayReport()
{
    try
    {
        ReportsDS ReportDS = new ReportsDS();
        ReportDocument obj = new ReportDocument();
        ReportDS = Report_GetReportData();
        obj.FileName = "rptCompanyReport.rpt";
        obj.SetDataSource((DataTable)ReportDS.Company);
        crystalReportViewer1.ReportSource = obj;
    }
    catch (Exception ex)
    {
        Logging.CustomizedException(ref ex, false);
    }
}

Windows 2016 Hosting :: How to Setup Windows Server 2016 with Static IP

clock January 29, 2019 08:00 by author Jervis

If you are setting up Windows Server 2016 as a domain controller, or in any other production server role on your network, it is recommended that you give it a static IP address. This is a quick guide on how to do that.

Note: you need an administrator account on the server to set up your Windows 2016 server with a static IP address.

Login to your Windows 2016 server, and click on the Start button, and then click on the Control Panel:

Then click on View network status and tasks under the Network and Internet applet:

Then click on Change adapter settings on the left menu:

Then right-click on your network connection and select Properties:

Select Internet Protocol Version 4 (TCP/IPv4) and then click on Properties:

Enter the IP address you want to assign to this server, along with the network mask, default gateway, and DNS server IP address:

Click  OK and then reboot the server.

Windows 2016 Hosting :: How to Create and Configure VMs in Windows Server 2016 Hyper-V

clock January 21, 2019 08:04 by author Jervis

In this post, we will explore how to create and configure VMs in Windows Server 2016 Hyper-V.

Creating a New VM

First, you need to use the Hyper-V manager to connect to the Hyper-V host. The Hyper-V manager is included in the Remote Server Administration Tools (RSAT; a separate download) for client operating systems such as Windows 10, or included in the Server Manager “install features” section of Windows Server 2016.

To begin, right-click your Hyper-V host and select New > VM.

This launches the New Virtual Machine Wizard.

Begin the configuration by selecting a name for your VM.

Generation of the VM

Next, you are asked to select the Generation of the VM. There are two choices here: Generation 1 and Generation 2. What are the differences?

To start with, Generation-2 VMs are only compatible with Hyper-V versions 2012 R2 and later. Furthermore, Windows Server 2012/Windows 8 64-bit and above are supported with Generation-2; 32-bit versions of those operating systems do not work. In fact, if you create a Generation-2 VM and try to boot from an ISO of a 32-bit OS, you receive an error stating that no boot media can be found. Microsoft has also been working on support of Generation-2 VMs with Linux. Be sure to check with your particular distribution, as currently not all are supported with Generation 2. There is one more consideration: for those thinking of moving a previously-created Hyper-V VM to Azure, Generation 2 is not supported.

For greater compatibility, including the ability to move to Azure, Generation 1 VMs should be selected. If none of the limitations mentioned apply, and you want to use features such as UEFI secure boot, then Generation 2 is the preferred choice.

Once a VM is created, you cannot change the Generation. Make sure you choose the right Generation before proceeding.

Memory Management in Hyper-V

The next configuration section is where we can Assign Memory.

Hyper-V's memory management includes an option called Dynamic Memory; the checkbox to enable the feature appears at this stage. If you enable this option, Hyper-V cooperates with the VM guest operating system in managing guest operating system memory.

Using the “hot add” feature, Hyper-V expands the guest operating system memory as memory demands increase within the guest. Dynamic Memory helps to dynamically and automatically divide RAM between running VMs, reassigning memory based on changes in their resource demands. This helps to provide more efficient use of memory resources on a Hyper-V host as well as greater VM density.

When you select Use Dynamic Memory for this virtual machine, you can set minimum and maximum values for the RAM that is dynamically assigned to the VM.

Networking Configuration

The next step in our VM configuration is Configuring Networking. For a VM to have network connectivity, you must attach it to a virtual switch that is connected to the network. You can also leave a VM in a disconnected state; a network connection is not required to complete the VM configuration. In this example, we are connecting the VM to the ExternalSwitch, a virtual switch connected to the production LAN.

Hard Disk Configuration

The next step is configuring the hard disk that is assigned to your VM. There are three options that you can choose from:

If you choose the Create a virtual hard disk option, you are creating a brand-new .vhdx disk on your Hyper-V host. You can also set the size of the disk. The wizard defaults to 127 GB, which can easily be changed.

The Use an existing virtual hard disk option lets you attach your new VM configuration to an existing virtual disk. Perhaps you copied over a vhdx file that you want to reuse with the new VM configuration. You can simply point the wizard to the vhdx file with this option.

With the third option – Attach a virtual hard disk later – you can choose to skip the creation of a hard disk in the wizard and assign a disk later.

There is one significant caveat to the Create a virtual hard disk option: you have no choice in the type of disk that is created. By default, Hyper-V creates "dynamically expanding" disks, which are thin-provisioned: space is used only as needed. There are some downsides to this approach, however. While the Hyper-V storage driver generally makes efficient use of resources, for the best performance many may still prefer to provision thick ("fixed size") disks in Hyper-V. To do that, choose the third option and attach a fixed-size virtual hard disk after your VM is created.

Installation Options

The next step is to go through the Installation Options. This means configuring how you want to install the guest operating system (OS) in your new VM.

The most common way is to Install an operating system from a bootable image file. You need to have an ISO file of the OS saved somewhere on your server. Simply guide the Wizard to the location using the Browse button.

Your alternatives are to Install an operating system later or Install an operating system from a network-based installation server.

You’ve now reached the summary of your configuration choices. Once you click Finish, your VM is created according to the options you specified.

Now that configuration and creation are complete, you can power on your VM. Simply right-click the VM and select Start.


You can connect to the console by right-clicking the VM and selecting Connect.


After connecting to the console, we should now be able to boot our VM and install the operating system as usual, through the operating system installation prompts.

Windows Server 2019 Hosting :: Top 6 Features in Windows Server 2019

clock January 11, 2019 07:43 by author Jervis

Windows Server 2019 is now generally available to the public! As you know, whenever Windows gets ready to make a major operating system release, it’s time to prepare for some changes. In this piece, we’ll give you a crash course in what to be excited (or worried) about in Server 2019, provide an overview of some exciting new features, and discuss how you can get your hands on Microsoft’s latest server operating system.

What can you expect from the new Windows Server 2019? Let's get started. For your information, as a Microsoft hosting partner we will also support Windows Server 2019 in our hosting environment soon.

1. Enterprise-grade hyperconverged infrastructure (HCI)

With the release of Windows Server 2019, Microsoft rolls up three years of updates for its HCI platform. That's because the gradual upgrade schedule Microsoft now uses includes what it calls Semi-Annual Channel releases: incremental upgrades as they become available. Every couple of years it then creates a major release, called the Long-Term Servicing Channel (LTSC) version, that includes the upgrades from the preceding Semi-Annual Channel releases. The LTSC release, Windows Server 2019, was previewed through Microsoft's Insider program and is now generally available.

While the fundamental components of HCI (compute, storage and networking) have been improved with the Semi-Annual Channel releases, for organizations building datacenters and high-scale software defined platforms, Windows Server 2019 is a significant release for the software-defined datacenter.

With the latest release, HCI is provided on top of a set of components that are bundled with the server license. This means a backbone of servers running Hyper-V, enabling dynamic increase or decrease of capacity for workloads without downtime.

2. GUI for Windows Server 2019

A surprise for many enterprises that started to roll out the Semi-Annual Channel versions of Windows Server 2016 was the lack of a GUI in those releases. The Semi-Annual Channel releases only supported the GUI-less Server Core (and Nano Server) configurations. With the LTSC release of Windows Server 2019, IT pros will once again get the desktop GUI of Windows Server in addition to the GUI-less Server Core and Nano Server releases.

3. Project Honolulu

With the release of Windows Server 2019, Microsoft will formally release its Project Honolulu server management tool. Project Honolulu is a central console that allows IT pros to easily manage GUI and GUI-less Windows Server 2019, 2016, and 2012 R2 servers in their environments.

Early adopters have appreciated the simplicity of management that Project Honolulu provides by rolling up common tasks such as performance monitoring (PerfMon), server configuration and settings, and the management of Windows services that run on server systems. This makes these tasks easier for administrators to handle across a mix of servers in their environment.

4. Improvements in security

Microsoft has continued to build security functionality into the platform to help organizations address an "expect breach" model of security management. Rather than assuming firewalls along the perimeter of an enterprise will prevent any and all security compromises, Windows Server 2019 assumes servers and applications within the core of a datacenter have already been compromised.

Windows Server 2019 includes Windows Defender Advanced Threat Protection (ATP), which assesses common vectors for security breaches and automatically blocks and alerts on potential malicious attacks. Users of Windows 10 have received many of the Windows Defender ATP features over the past few months. Including Windows Defender ATP in Windows Server 2019 lets organizations take advantage of data storage, network transport, and security-integrity components to prevent compromises on Windows Server 2019 systems.

5. Smaller, more efficient containers

Organizations are rapidly minimizing the footprint and overhead of their IT operations, replacing bloated servers with thinner and more efficient containers. Windows Insiders have benefited from higher compute density, which improves overall application operations without additional expenditure on server hardware or expansion of hardware capacity.

Windows Server 2019 has a smaller, leaner Server Core image that cuts virtual machine overhead by 50-80 percent. When an organization can get the same (or more) functionality in a significantly smaller image, it can lower costs and improve the efficiency of its IT investments.

6. Windows Subsystem for Linux

A decade ago, one would rarely mention Microsoft and Linux in the same breath as complementary platform services, but that has changed. Windows Server 2016 has open support for Linux instances as virtual machines, and the new Windows Server 2019 release makes huge headway by including an entire subsystem optimized for the operation of Linux systems on Windows Server.

The Windows Subsystem for Linux extends basic virtual machine operation of Linux systems on Windows Server and provides a deeper layer of integration for networking, native filesystem storage, and security controls. It can enable encrypted Linux virtual instances: Microsoft provided Shielded VMs for Windows in Windows Server 2016, and Windows Server 2019 now brings native Shielded VMs for Linux as well.

Enterprises have found that the optimization of containers, along with the ability to natively support Linux on Windows Server hosts, can decrease costs by eliminating the need for two or three infrastructure platforms and instead running everything on Windows Server 2019.

Because most of the "new features" in Windows Server 2019 have been included in updates over the past couple of years, these features are not earth-shattering surprises. However, it also means that the features in Windows Server 2019 that were part of Windows Server 2016 Semi-Annual Channel releases have been tried, tested, updated, and proven already, so that when Windows Server 2019 ships, organizations don't have to wait six to twelve months for a service pack of bug fixes.

This is a significant change that is helping organizations plan their adoption of Windows Server 2019 sooner than they might have adopted a major release in the past, and to gain its benefits in security, scalability, and datacenter optimization that much sooner.



IIS Hosting :: Tips to Monitor Your IIS Performance

clock December 21, 2018 08:03 by author Jervis

Need help monitoring IIS? This guide covers the basics, including HTTP ping checks, IIS Application Pools, and important Windows Performance Counters. We also look at how an application performance management system can simplify all of this and provide more advanced IIS performance monitoring for ASP.NET applications.

From Basics to Advanced IIS Performance Monitoring:

  • Ensuring your IIS Application is running
  • Windows performance counters for IIS & ASP.NET
  • Advanced IIS performance monitoring for ASP.NET

How to Monitor if Your IIS Application is Running

The first thing you want to do is set up monitoring to ensure that your application is running.

Website Monitor via HTTP Testing

One of the best and easiest things you can do is set up a simple HTTP check that runs every minute. This will give you a baseline to know if your site is up or down. It can also help you track how long it takes to respond. You could also monitor for a 200 OK status or if the request returns specific text that you know should be included in the response.

Monitoring IIS via a simple HTTP check is also a good way to establish a basic SLA monitor. No matter how many servers you have, you can use this to know if your web application was online and available.

Here is an example of one of the HTTP checks we use against Elasticsearch to help monitor it. We do this via Retrace; you could also use tools like Pingdom. In this example, we receive alerts if number_of_nodes is not what we expect or if the check doesn't find an HTTP status of 200 OK.
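As a sketch of what such a check can look like in code, the C# program below requests a URL and treats anything other than a 200 OK response containing an expected string as "down". The URL and expected text are placeholders, not values from a real deployment:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal HTTP health check: verifies a 200 OK status and that the
// response body contains a known string.
class HttpCheck
{
    static async Task<bool> IsHealthyAsync(string url, string expectedText)
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) })
        {
            try
            {
                HttpResponseMessage response = await client.GetAsync(url);
                if ((int)response.StatusCode != 200)
                    return false; // any non-200 status counts as down

                string body = await response.Content.ReadAsStringAsync();
                return body.Contains(expectedText);
            }
            catch (HttpRequestException) { return false; }   // DNS failure, connection refused, ...
            catch (TaskCanceledException) { return false; }  // request timed out
        }
    }

    static void Main()
    {
        // Placeholders: point this at your own site and a string you expect in the page.
        bool up = IsHealthyAsync("http://localhost/", "Welcome").GetAwaiter().GetResult();
        Console.WriteLine(up ? "UP" : "DOWN");
    }
}
```

Scheduled every minute (for example from Task Scheduler), the UP/DOWN result gives you the same baseline availability data an external service like Pingdom would.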

Ensure Your IIS Application Pool is Running

If you have been using IIS very long, you have probably witnessed times when your application mysteriously stops working. After some troubleshooting, you may find that your IIS Application Pool is stopped for some reason, causing your site to be offline.

Sometimes an IIS Application Pool will crash and stop due to fatal application errors, issues with the user account the app pool runs under, bad configuration, or other random problems. Due to these types of problems, it is possible to get it into a state where it won't start at all.

It is a best practice to always monitor that your IIS Application Pool is started. It runs as w3wp.exe. Most monitoring tools have a way to monitor IIS Application Pools; our product, Retrace, monitors them by default.

One weird thing about app pools is that they can be set to "Started" but may not actually be running as w3wp.exe if there is no traffic to your application. In these scenarios, w3wp.exe may not be running, but there is no actual problem. This is why you need to monitor the pool via IIS's status and not just look for w3wp.exe running on your server.
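This is why a programmatic check should ask IIS itself. The sketch below uses the Microsoft.Web.Administration assembly (installed with IIS, also available on NuGet) to read an application pool's state; "MyAppPool" is a placeholder name, and the code must run elevated on the IIS server:

```csharp
using System;
using Microsoft.Web.Administration;

// Queries IIS for the app pool state instead of scanning for w3wp.exe.
class AppPoolCheck
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"]; // placeholder name
            if (pool == null)
            {
                Console.WriteLine("App pool not found");
                return;
            }

            // ObjectState reflects IIS's own view: a Started pool with no
            // traffic may legitimately have no w3wp.exe process yet.
            Console.WriteLine("State: " + pool.State);

            if (pool.State == ObjectState.Stopped)
                pool.Start(); // optional self-healing step: restart a stopped pool
        }
    }
}
```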

Recommended Performance Counters for IIS Monitoring

One of the advantages of using IIS as a web server is all of the metrics available via Windows Performance Counters. There is a wide array of them available between IIS, ASP.NET and .NET. For this guide on IIS performance monitoring, I am going to review some of the top Performance Counters to monitor.

System/Process Counters

  • CPU %: Monitor both the overall server CPU usage and the CPU usage of your IIS Worker Process.
  • Memory: You should consider tracking the currently used and available memory for your IIS Worker Process.

IIS Performance Counters

  • Web Service – Bytes Received/Sec: Helpful to track to identify potential spikes in traffic.
  • Web Service – Bytes Sent/Sec: Helpful to track to identify potential spikes in traffic.
  • Web Service – Current Connections: Through experience with your app, you can identify what a normal value is for this counter.

ASP.NET Performance Counters

  • ASP.NET Applications – Requests/Sec: You should track how many requests are handled by both IIS and ASP.NET. Some requests, like static files, may be processed only by IIS and never touch ASP.NET.
  • ASP.NET Applications – Requests in Application Queue: If this number is high, your server may not be able to handle requests fast enough.
  • .NET CLR Memory – % Time in GC: If your app spends more than 5% of its time in garbage collection, you may want to review how object allocations are performed.

ASP.NET Error Rate Counters

  • .NET CLR Exceptions – # of Exceps Thrown: This counter allows you to track all .NET exceptions that are thrown, even if they are handled and discarded. A very high rate of exceptions can cause hidden performance problems.
  • ASP.NET Applications – Errors Unhandled During Execution/sec: The number of unhandled exceptions that may have impacted your users.
  • ASP.NET Applications – Errors Total/Sec: Number of errors during compilations, pre-processing and execution. This may catch some types of errors that other Exception counts don’t include.

You should be able to monitor these Windows Performance Counters with most server monitoring solutions.
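To sample a few of these counters yourself, the System.Diagnostics.PerformanceCounter API can read them directly on the server. A minimal sketch (Windows-only; the "Web Service" category exists only where IIS is installed, and rate/percentage counters need two samples before they return a value):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Reads a couple of the performance counters discussed above.
class IisCounterSample
{
    static void Main()
    {
        var connections = new PerformanceCounter("Web Service", "Current Connections", "_Total");
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");

        // The first NextValue() of a rate or percentage counter returns 0;
        // take a priming sample, wait, then read the real value.
        cpu.NextValue();
        Thread.Sleep(1000);

        Console.WriteLine("Current connections: " + connections.NextValue());
        Console.WriteLine("CPU %: " + cpu.NextValue().ToString("F1"));
    }
}
```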

Note: Some Windows Performance Counters are difficult to monitor because the process name or ID changes constantly. You may find them hard to monitor in some server monitoring solutions for this reason.

Advanced IIS Performance Monitoring for ASP.NET

Some application monitoring tools, like Retrace, are designed to provide holistic monitoring for your ASP.NET applications. All you have to do is install them, and they can auto-detect all of your ASP.NET applications and automatically start monitoring the basics, including key Performance Counters and whether your IIS site and Application Pool are running.

Retrace also does lightweight profiling of your ASP.NET code. This gives you code-level visibility to understand how your application is performing and how to improve it.

OWIN Hosting :: Introduction about OWIN

clock November 12, 2018 07:34 by author Jervis


If you look at the current open-source web stacks, they are evolving fast, with a wide range of capabilities being added day by day. Microsoft, too, is constantly working to update its web application stack and has released many new framework components. Though Microsoft's Asp.Net is a very mature framework, it lacked some basic qualities (portability, modularity, and scalability) that the open-source web stacks were offering. This led to the development of OWIN, a specification for how Asp.Net applications and hosting servers should be built so that they work without any dependency on each other and with minimal runtime packages. By implementing the OWIN specification, Asp.Net becomes more modular and more scalable, and it can be easily ported to different environments, making it competitive with its open-source counterparts. Beyond this, OWIN also aims to nurture .Net open-source community participation in framework and tooling support.


OWIN stands for Open Web Interface for .Net. It is a community-owned specification (or standard), not a framework of its own. OWIN defines an interface specification that decouples the web server and the application using a simple delegate structure. We will discuss this delegate later in the article. Now, let's take a closer look at the classic Asp.Net framework's design issues and how OWIN tries to mitigate them.

ASP.Net - Webserver Dependencies

The Asp.Net framework is strongly dependent on IIS and its capabilities, so it can be hosted only within IIS. This has made porting an Asp.Net application to another host practically impossible. In particular, Asp.Net applications are built upon the assembly called System.Web, which in turn depends heavily on IIS to provide many web infrastructure features such as request/response filtering, logging, and so on.

The System.Web assembly also includes many default components that are plugged into the Http pipeline regardless of whether the application uses them. This means some unwanted features are executed in the pipeline for every request, which degrades performance, and it has allowed open-source counterparts like NodeJs, Ruby, etc. to perform considerably better than the Asp.Net framework.

The OWIN specification was created to remove these dependencies, make the stack more modular, and build a loosely coupled system. In simple terms, OWIN removes the Asp.Net application's dependency on the System.Web assembly in the first place. That said, OWIN is not designed to replace the entire Asp.Net framework or IIS as such; under the OWIN model we still develop an Asp.Net web application in much the same way as before, but with some changes in the infrastructure services of the Asp.Net framework.

The other major drawback of the System.Web assembly is that it is bundled as part of the .Net Framework installer package. This made delivering updates and bug fixes for Asp.Net components a difficult and time-consuming task for Microsoft's Asp.Net team. By removing the dependency on the System.Web assembly, Microsoft can now deliver its OWIN web stack updates faster through the NuGet package manager.

Implementing OWIN

As mentioned before, OWIN is not an implementation by itself. It just defines a simple delegate structure, commonly called the application delegate or AppFunc, designed for interaction between the web server and the application with minimal dependency. The AppFunc delegate signature is shown below.

Func<IDictionary<string, object>, Task> 

This delegate takes a dictionary (IDictionary<string, object>) as its single argument, called the environment dictionary, and returns a Task. The dictionary is mutable, meaning it can be modified further down the pipeline. All applications should implement this delegate to become OWIN compliant.

In an OWIN deployment, the OWIN host populates the environment dictionary with all necessary information about the request and invokes the delegate. This is the entry point to the application; in other words, it is where the application's startup/bootstrap happens, and hence the implementing class is called the Startup class. The application can then modify the dictionary or populate the response in it during execution. There are some mandatory keys/values in the environment dictionary which the host must populate before invoking the application.
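To make this concrete, here is a small self-contained sketch: an AppFunc implementation that reads request keys from the environment dictionary and writes to the response stream. In a real deployment the host builds the dictionary from the incoming HTTP request; here we fake one by hand to show the flow (the keys used are the standard owin.* keys from the spec):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

class MinimalOwinApp
{
    // An OWIN-compliant application: one delegate, no dependency on the host.
    static readonly AppFunc App = async environment =>
    {
        var path = (string)environment["owin.RequestPath"];
        var responseBody = (Stream)environment["owin.ResponseBody"];
        environment["owin.ResponseStatusCode"] = 200;

        byte[] bytes = Encoding.UTF8.GetBytes("Hello from " + path);
        await responseBody.WriteAsync(bytes, 0, bytes.Length);
    };

    static void Main()
    {
        // A host would populate this from a real request; we fake it here.
        var env = new Dictionary<string, object>
        {
            ["owin.RequestPath"] = "/demo",
            ["owin.ResponseBody"] = new MemoryStream()
        };

        App(env).Wait();

        var body = Encoding.UTF8.GetString(((MemoryStream)env["owin.ResponseBody"]).ToArray());
        Console.WriteLine(body); // prints: Hello from /demo
    }
}
```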

Below are the components of an OWIN-based application that make this happen. From the OWIN spec:

  • Server — The HTTP server that directly communicates with the client and then uses OWIN semantics to process requests. Servers may require an adapter layer that converts to OWIN semantics.
  • Web Framework — A self-contained component on top of OWIN exposing its own object model or API that applications may use to facilitate request processing. Web Frameworks may require an adapter layer that converts from OWIN semantics.
  • Web Application — A specific application, possibly built on top of a Web Framework, which is run using OWIN compatible Servers.
  • Middleware — Pass through components that form a pipeline between a server and application to inspect, route, or modify request and response messages for a specific purpose.
  • Host — The process an application and server execute inside of, primarily responsible for application startup. Some Servers are also Hosts.

Building an OWIN compliant application

To be OWIN compliant, our Asp.Net application should implement the application delegate AppFunc. With the current set of framework components, we also need an actual OWIN implementation for the host, application, and infrastructure service components. So, building an OWIN compliant application is not just about implementing the AppFunc delegate; it also requires other components. Here comes the need for Project Katana, Microsoft's own implementation of this specification.

Asp.Net's infrastructure services, such as authentication, authorization, routing, and other request/response filtering, have to be provided by OWIN middleware (pass-through components) to remove the dependency on IIS. These middleware components resemble the Http modules in the traditional Asp.Net pipeline. They are called in the same order they are added in the Startup class, similar to HttpModule event subscription in the classic Asp.Net application object (Global.asax). To recall, the AppFunc delegate implementation in our application is commonly called the Startup class. We will understand it better when we build our first application.
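As an illustration of that ordering, here is a sketch of a Katana Startup class with one pass-through middleware and one terminal component. It assumes the Microsoft.Owin packages from NuGet; for self-hosting you would add Microsoft.Owin.SelfHost and start it with WebApp.Start<Startup>("http://localhost:9000") from Microsoft.Owin.Hosting:

```csharp
using System;
using Microsoft.Owin;
using Owin;

// Katana Startup class: middleware run in the order they are registered.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Pass-through middleware, analogous to an HttpModule: runs code
        // before and after the rest of the pipeline.
        app.Use(async (context, next) =>
        {
            Console.WriteLine("Request: " + context.Request.Path);
            await next.Invoke(); // hand off to the next component
            Console.WriteLine("Response: " + context.Response.StatusCode);
        });

        // Terminal component: produces the actual response.
        app.Run(context => context.Response.WriteAsync("Hello from OWIN"));
    }
}
```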

Project Katana has evolved considerably since its initial release and is now fully incorporated into the newest version of Asp.Net, called Asp.Net Core. The next section provides a brief history of OWIN implementations, from Project Katana to the Asp.Net Core releases.

Project Katana to Asp.Net Core

Project Katana is Microsoft's first implementation of the OWIN specification and is delivered as NuGet packages. Developers can include these packages from NuGet and start working.

Microsoft planned Asp.Net vNext, the next version after Asp.Net 4.6, with full support for OWIN, and Project Katana was thus slowly retired. Note: any project built on the Katana libraries will continue to work as expected.

.Net Core 1.0 is another implementation of the .Net Framework. .Net Core is a portable, open-source, modular framework, rebuilt from scratch with a new implementation of the CLR. Asp.Net vNext was renamed Asp.Net 5.0 and is capable of running on both the .Net Core framework and the full .Net Framework 4.6.2.

Asp.Net 5.0 was then renamed Asp.Net Core, since the framework was rewritten from scratch and Microsoft felt the name was more appropriate. Asp.Net Core is delivered as NuGet packages and runs on both .Net Core 1.0 and .Net Framework 4.5.1+. So, the latest version of Asp.Net is now officially called Asp.Net Core.

Though OWIN and Project Katana were released years ago, their implementations have seen many updates since. I hope this article helped you understand the current status and start the learning process of building OWIN-based applications.


