Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

SQL 2014 Hosting - ASPHostPortal.com :: Introduction to In-Memory OLTP in SQL 2014

July 4, 2014 06:49 by author Jervis

Microsoft's new release of SQL Server 2014 comes close on the heels of the SQL Server 2012 release. For many organizations, this could be a hurdle to adoption, because upgrading core pieces of an IT infrastructure can be both costly and resource-intensive. However, SQL Server 2014 has several compelling new features that can definitely justify an upgrade. Here is an overview of the SQL Server 2014 features:

1. New In-Memory OLTP Engine
2. Enhanced Windows Server 2012 Integration
3. Improvement in Business Intelligence
4. Office 365 Integration
5. And more

In today's post, I want to examine SQL Server 2014 In-Memory OLTP from different angles: how to start using it, directions for migration planning, a close review of many of its limitations, and a discussion of its applicability, including where SQL Server In-Memory OLTP can be an alternative to in-memory dynamic caching and where it is complementary.

The question now is: what is In-Memory Online Transaction Processing?

SQL Server 2014's biggest feature is definitely its in-memory transaction processing, or In-Memory OLTP, which Microsoft claims makes database operations much faster. In-memory database technology for SQL Server has long been in the works under the code name "Hekaton". Hekaton is a database engine component optimized for accessing memory-resident tables, and it is fully integrated into the SQL Server 2014 database engine.

The Function of In-Memory Online Transaction Processing

Hekaton facilitates the creation of memory-resident tables (i.e., memory-optimized tables) and indexes. Besides that, it also provides the option to compile Transact-SQL stored procedures that access memory-optimized tables to machine code. Memory-optimized tables deliver better performance because the core engine uses lock-free algorithms that require no locks or latches when memory-optimized tables are referenced during transaction processing.
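
To make this concrete, here is a minimal sketch of what a memory-optimized table and a natively compiled stored procedure look like. It assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup; the object, instance, and database names are hypothetical, and the script is run here through Invoke-Sqlcmd from the SQLPS module that ships with SQL Server 2014's management tools.

# Hypothetical In-Memory OLTP objects; adjust names and bucket counts to your workload.
$tsql = @'
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    CreatedUtc DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- Natively compiled procedures are compiled to machine code at create time.
CREATE PROCEDURE dbo.InsertCart @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CustomerId, CreatedUtc)
    VALUES (@CustomerId, SYSUTCDATETIME());
END
'@
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyShop" -Query $tsql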

The Low Cost of Memory Behind In-Memory OLTP

The SQL Server database engine was designed in the days when main memory was very costly. Under that design, data is stored on disk and loaded into main memory as required for transaction processing, and any changes to the in-memory data are written back to disk. This disk I/O is the main bottleneck for OLTP applications with a huge number of concurrent users, as it involves waiting for locks to be released, latches to become available, and log writes to complete.

Today, main memory is far less expensive, and enterprises can easily afford production database servers with main memory sizes in terabytes. These declining memory prices led Microsoft to rethink the original database engine design from the days when main memory was costly. The result of this rethink is the In-Memory OLTP (a.k.a. Hekaton) database engine component, which supports memory-resident tables and indexes. The In-Memory OLTP engine uses lock-free algorithms (multiversion optimistic concurrency control) that require no locks or latches when memory-optimized tables are referenced during transaction processing. To support data durability it still writes to the transaction log, but the amount of data written to the log is reduced considerably.

Migrating to SQL Server 2014 In-Memory OLTP

Migration to In-Memory OLTP has to be performed in a development environment and carefully tested. Your high-availability design, database design, table schemas and data, stored procedures, business logic in the database, and even application code may all require many syntax changes to use In-Memory OLTP.

This is not a “click and migrate” process. It requires development cycles, application and database design and code changes.

The right way to use the In-Memory OLTP engine is:

  • Plan your production database architecture. In-Memory OLTP is very different and has many limitations in terms of the available high-availability, mirroring, and replication functionality;
  • Plan carefully your new database (and possibly application) design;
  • Migrate several specific tables and procedures that are good benefit candidates;
  • Develop or change your business-logic to fit the new design;
  • Test and evaluate;
  • Deploy

To evaluate whether In-Memory OLTP can improve your database performance, you can use Microsoft's new AMR (Analysis, Migrate and Report) tool. To help with the actual migration, you can use the Memory Optimization Advisor for tables and the Native Compilation Advisor for porting a stored procedure to a natively compiled stored procedure.

The AMR tool helps identify the tables and stored procedures that would benefit from being moved into memory and also helps perform the actual migration of those database objects. The AMR tool is installed when you select the "Complete" option of "Management Tools", and is later accessed through SQL Server Management Studio (SSMS) under Reports > Management Data Warehouse > Transaction performance reports.

The AMR tool reports which tables and procedures can benefit the most from In-Memory OLTP and provides a hint of how complex the conversion will be. The reports base their recommendations on usage, contention, and performance.

After you identify a table that you would like to port to In-Memory OLTP, you can use the Memory-Optimization Advisor to help you migrate the disk-based table. In SSMS Object Explorer, right-click the table you want to convert, and select Memory-Optimization Advisor.

Conclusion

This article is only a brief introduction to one of the new features in SQL Server 2014. Want to try more? We support the latest SQL Server 2014 hosting on our hosting environment. Just take a look at our site for more information.



Windows 2012 Hosting :: Hyper-V 3.0 Network Virtualization on Windows Server 2012

August 28, 2013 10:21 by author Mike

Windows Server 2012 introduces a slew of new technologies. These technologies enable Windows Server systems and virtual environments to meet all manner of new requirements and scenarios, including private and public cloud implementations. Often, this type of scenario involves a single infrastructure that's shared by different business units or even different organizations.

In this article, I want to describe network virtualization. Other great capabilities include a new site-to-site VPN solution; huge enhancements to the Server Message Block (SMB) protocol, enabling VMs to run from a Windows Server 2012 file share; native NIC teaming; and consistent device naming. But I want to focus on the major network technologies that most affect virtualization.

Virtualization has always striven to abstract one resource layer from another, giving improved functionality and portability. But networking hasn't embraced this goal, and VMs are tied to the networking configuration on the host that runs them. Microsoft System Center Virtual Machine Manager (VMM) 2012 tries to link VMs to physical networks through its logical networks feature, which lets you create logical networks such as Development, Production, and Backup. You can then create IP subnets and virtual LANs (VLANs) for each physical location that has a connection to a logical network. This capability lets you create VMs that automatically connect to the Production network, for example; VMM works out the actual Hyper-V switch that should be used and the IP scheme and VLAN tag, based on the actual location to which the VM is deployed.

This feature is great. But it still doesn't help in scenarios in which I might be hosting multiple tenants that require their own IP schemes, or even one tenant that requires VMs to move between different locations or between private and public clouds, without changing IP addresses or policies that relate to the network. Typically, public cloud providers require clients to use the hosted IP scheme, which is an issue for flexible migration between on-premises and off-premises hosting.

Both these scenarios require the network to be virtualized, and the virtual network must believe that it wholly owns the network fabric, in the same way that a VM believes it owns the hardware on which it runs. VMs don't see other VMs, and virtual networks shouldn't see or care about other virtual networks on the same physical fabric, even when they have overlapping IP schemes. Network isolation is a crucial part of network virtualization, especially when you consider hosted scenarios. If I'm hosting Pepsi and Coca-Cola on the same physical infrastructure, I need to be very sure that they can't see each other's virtual networks. They need complete network isolation.

This virtual network capability is enabled through the use of two IP addresses for each VM and a virtual subnet identifier that indicates the virtual network to which a particular VM belongs. The first IP address is the standard address that's configured within the VM and is referred to as the customer address (using IEEE terms). The second IP address is the address with which the VM communicates over the physical network, known as the provider address.

In the example that Figure 1 shows, we have one physical fabric. Running on that fabric are two separate organizations: red and blue. Each organization has its own IP scheme, which can overlap, and the virtual networks can span multiple physical locations. Each VM that is part of the virtual red or blue network has its own customer address. A separate provider address is used to send the actual IP traffic over the physical fabric.

Figure 1: Virtual networking example

You can see that the physical fabric has the network and compute resources and that multiple VMs run across the hosts and sites. The color of each VM corresponds to its virtual network (red or blue). Even though the VMs are distributed across hosts and locations, each virtual network, with its own IP scheme, is completely isolated from the other virtual networks.

Two solutions, IP rewrite and Generic Routing Encapsulation (GRE), enable network virtualization in Windows Server 2012. Both solutions allow completely separate virtual networks with their own IP schemes (which can overlap) to run over one shared fabric.

IP rewrite. The first option is IP rewrite, which does exactly what the name suggests. Each VM has two IP addresses: a customer address, which is configured within the VM, and a provider address, which is used for the actual packet transmission over the network. The Hyper-V switch looks at the traffic that the VM is sending out, looks at the virtual subnet ID to identify the correct virtual network, and rewrites the IP address source and target from the customer addresses to the corresponding provider addresses. This approach requires many IP addresses from the provider address pool because every VM needs its own provider address. The good news is that because the IP packet isn't being modified (apart from the address), hardware offloads such as virtual machine queue (VMQ), checksum, and receive-side scaling (RSS) continue to function. IP rewrite adds very little overhead to the network process and gives very high performance.

Figure 2 shows the IP rewrite process, along with the mapping table that the Hyper-V host maintains. The Hyper-V host maintains the mapping of customer-to-provider addresses, each of which is unique for each VM. The source and destination IP addresses of the original packet are changed as the packet is sent via the Hyper-V switch. The arrows in the figure show the flow of IP traffic.

Figure 2: IP rewrite process

GRE. The second option is GRE, an Internet Engineering Task Force (IETF) standard. GRE wraps the originating packet, which uses the customer addresses, inside a packet that can be routed on the physical network by using the provider address and that includes the actual virtual subnet ID. Because the virtual subnet ID is included in the wrapper packet, VMs don't require their own provider addresses. The receiving host can identify the targeted VM based on the target customer address within the original packet and the virtual subnet ID in the wrapper packet. All the Hyper-V host running the originating VM needs to know is which Hyper-V host runs the target VM, so it can send the packet over the network.

The use of a shared provider address means that far fewer IP addresses from the provider IP pools are needed. This is good news for IP management and the network infrastructure. However, there is a downside, at least as of this writing. Because the original packet is wrapped inside the GRE packet, any kind of NIC offloading will break. The offloads won't understand the new packet format. The good news is that many major hardware manufacturers are in the process of adding support for GRE to all their network equipment, enabling offloading even when GRE is used.

Figure 3 shows the GRE process. The Hyper-V host still maintains the mapping of customer-to-provider address, but this time the provider address is per Hyper-V host virtual switch. The original packet is unchanged. Rather, the packet is wrapped in the GRE packet as it passes through the Hyper-V switch, which includes the correct source and destination provider addresses in addition to the virtual subnet ID.


Figure 3: GRE process

In both technologies, virtualization policies are used between all the Hyper-V hosts that participate in a specific virtual network. These policies enable the routing of the customer address across the physical fabric and track the customer-to-provider address mapping. The virtualization policies can also define the virtual networks that are allowed to communicate with other virtual networks. The virtualization policies can be configured by using Windows PowerShell, which is a common direction for Windows Server 2012. This makes sense: when you consider massive scale and automation, the current GUI really isn't sufficient. The challenge when using native PowerShell commands is the synchronous orchestration of the virtual-network configuration across all participating Hyper-V hosts.
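
The shape of those PowerShell policy entries looks roughly like the sketch below. It assumes the NetWNV cmdlets that ship with Windows Server 2012; the addresses, MAC, interface index, virtual subnet ID, and VM name are all illustrative, and a real deployment would apply matching records on every participating host.

# Register the provider address this host uses on the physical fabric.
New-NetVirtualizationProviderAddress -ProviderAddress "192.168.1.10" -InterfaceIndex 12 -PrefixLength 24

# Map a VM's customer address to that provider address in virtual subnet 5001.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" -ProviderAddress "192.168.1.10" `
    -VirtualSubnetID 5001 -MACAddress "101010101105" -VMName "BlueVM01" -Rule "TranslationMethodEncap"

# Describe the customer address space routed inside the virtual network.
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-000000005001}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.0.0.0/24" -NextHop "0.0.0.0"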

Both options sound great, but which one should you use? GRE should be the network virtualization technology of choice: with hardware support it's faster than IP rewrite, and because of the reduced provider address requirements it places fewer burdens on the network infrastructure. Hardware support matters because GRE would otherwise break offloading, forcing the offload work into software, which would be very slow. Many major hardware manufacturers are in the process of adding GRE support to their network equipment, but until your networking equipment supports GRE, you should use IP rewrite, which requires no changes to the network infrastructure equipment.

 



Windows 2012 Hosting - ASPHostPortal :: Things to Know About Deduplication in Windows Server 2012

August 15, 2013 08:14 by author Jervis

Talk to most administrators about deduplication and the usual response is: Why? Disk space is getting cheaper all the time, with I/O speeds ramping up along with it. The discussion often ends there with a shrug.

But the problem isn’t how much you’re storing or how fast you can get to it. The problem is whether the improvements in storage per gigabyte or I/O throughputs are being outpaced by the amount of data being stored in your organization. The more we can store, the more we do store. And while deduplication is not a magic bullet, it is one of many strategies that can be used to cut into data storage demands.

Microsoft added a deduplication subsystem feature in Windows Server 2012, which provides a way to perform deduplication on all volumes managed by a given instance of Windows Server. Instead of relegating deduplication duty to a piece of hardware or a software layer, it’s done in the OS on both a block and file level — meaning that many kinds of data (such as multiple instances of a virtual machine) can be successfully deduplicated with minimal overhead.

If you plan to implement Windows Server 2012 deduplication technology, be sure you understand these seven points:

1. Deduplication is not enabled by default

Don’t upgrade to Windows Server 2012 and expect to see space savings automatically appear. Deduplication is treated as a file-and-storage service feature, rather than a core OS component. To that end, you must enable it and manually configure it in Server Roles | File And Storage Services | File and iSCSI Services. Once enabled, it also needs to be configured on a volume-by-volume basis.
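
If you prefer scripting to clicking, the same enablement is a two-cmdlet job; a minimal sketch, assuming a data volume E: (the drive letter is hypothetical):

# Install the deduplication role service, then enable dedup on the volume.
Import-Module ServerManager
Add-WindowsFeature FS-Data-Deduplication

Import-Module Deduplication
Enable-DedupVolume -Volume "E:"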

2. Deduplication won’t burden the system

Microsoft put a fair amount of thought into setting up deduplication so it has a small system footprint and can run even on servers that have a heavy load. Here are three reasons why:

a. Content is only deduplicated after n days, where n is 5 by default but user-configurable. This time delay keeps the deduplicator from trying to process content that is currently and aggressively being used, or from processing files as they're being written to disk (which would constitute a major performance hit).

b. Deduplication can be constrained by directory or file type. If you want to exclude certain kinds of files or folders from deduplication, you can specify those as well.

c. The deduplication process is self-throttling and can be run at varying priority levels. You can set the actual deduplication process to run at low priority and it will pause itself if the system is under heavy load. You can also set a window of time for the deduplicator to run at full speed, during off-hours, for example.

This way, with a little admin oversight, deduplication can be put into place on even a busy server and not impact its performance.
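
Each of the three knobs above maps to a PowerShell setting; a hedged sketch, with illustrative values and drive letter:

# a. Only touch files older than 10 days (the default is 5).
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 10

# b. Keep a folder out of deduplication entirely.
Set-DedupVolume -Volume "E:" -ExcludeFolder "E:\Databases"

# c. Let optimization run at full speed during an off-hours window.
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -Start "23:00" -DurationHours 6 -Days Monday,Tuesday,Wednesday,Thursday,Friday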

3. Deduplicated volumes are ‘atomic units’

'Atomic units' means that all of the deduplication information about a given volume is kept on that volume, so the volume can be moved intact to another system that supports deduplication. If you move it to a system that doesn't support deduplication, you'll only be able to see the nondeduplicated files. The best rule is not to move a deduplicated volume unless it's to another Windows Server 2012 machine.

4. Deduplication works with BranchCache

If you have a branch server also running deduplication, it shares data about deduped files with the central server and thus cuts down on the amount of data needed to be sent between the two.

5. Backing up deduplicated volumes can be tricky

A block-based backup solution — e.g., a disk-image backup method — should work as-is and will preserve all deduplication data.

File-based backups will also work, but they won’t preserve deduplication data unless they’re dedupe-aware. They’ll back up everything in its original, discrete, undeduplicated form. What’s more, this means backup media should be large enough to hold the undeduplicated data as well.

The native Windows Server Backup solution is dedupe-aware, although any third-party backup products for Windows Server 2012 should be checked to see if deduplication awareness is either present or being added in a future revision.

6. More is better when it comes to cores and memory

Microsoft recommends devoting at least one CPU core and 350 MB of free memory to process one volume at a time, which works through around 100 GB of storage in an hour (without interruptions), or around 2 TB a day. The more parallelism you have to spare, the more volumes you can process simultaneously.

7. Deduplication mileage may vary

Microsoft has crunched its own numbers and found that the nature of the deployment affected the amount of space savings. Multiple OS instances on virtual hard disks (VHDs) exhibited a great deal of savings because of the amount of redundant material between them; user folders, less so.

In its rundown of what are good and bad candidates for deduping, Microsoft notes that live Exchange Server databases are actually poor candidates. This sounds counterintuitive; you’d think an Exchange mailbox database might have a lot of redundant data in it. But the constantly changing nature of data (messages being moved, deleted, created, etc.) offsets the gains in throughput and storage savings made by deduplication. However, an Exchange Server backup volume is a better candidate since it changes less often and can be deduplicated without visibly slowing things down.

How much you actually get from deduplication in your particular setting is the real test of whether to use it. Therefore, it's best to start provisionally, perhaps on a staging server where you can set the deduplication "crawl rate" as high as needed, see how much space you save with your data, and then establish a schedule for performing deduplication on your own live servers.
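
For that provisional testing, you can trigger a job by hand and read the savings back instead of waiting for a schedule; a minimal sketch, again assuming volume E: is dedup-enabled:

# Run an optimization job now, watch it, then check the space savings.
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupJob                   # progress of running deduplication jobs
Get-DedupStatus -Volume "E:"   # saved space and optimized-file counts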

 



Windows 2008 Hosting Tips :: Installing IIS 7.5 on Windows Server 2008/2008 R2

July 1, 2013 09:05 by author Mike

IIS (Internet Information Services) 7.5 is a web server application and a set of feature extension modules created by Microsoft for use with Microsoft Windows; it was released with Windows Server 2008 R2 and Windows 7. The installation procedure for IIS is slightly different depending on which operating system you are using, so you will need to make sure you follow the correct method below for your operating system.

Follow these steps to install:

  • Navigate to Start\Control Panel\Administrative Tools and open Server Manager.
  • In the tree menu on the left, select Roles.
  • In the main window, find the sub-section Roles Summary (it should be at the top) and click Add Roles.
  • On the pop-up window, read the security warnings and then select Next.
  • Select Web Server (IIS) from the list of check box options.
  • After reading the introduction to IIS, click Next to continue
  • From the check boxes, find the Application Development section and select CGI. This will allow us to install PHP later without any modifications.

 

  • Click Next to continue.
  • Click Install to begin the installation.
  • After installation has finished, click Close.
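
On Windows Server 2008 R2 you can also script the same role installation instead of clicking through Server Manager; a minimal sketch (feature names as exposed by the ServerManager module):

# Install the Web Server (IIS) role together with CGI support.
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-CGI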

To check the installation, you can follow these steps:
- Navigate to Start\Control Panel\Administrative Tools and open Internet Information Services (IIS) Manager.

 

If it's installed properly, you'll be presented with the IIS Manager window.

That's it, IIS 7.5 is now installed and ready to be put into service.



IIS 7.5 Hosting - ASPHostPortal :: IIS 7.5 AppPool Identities

May 27, 2013 10:16 by author Ben

Windows 7 and Windows Server 2008 R2 ship with IIS 7.5, which introduces a new feature called Application Pool Identities. Application Pool Identities allows you to run application pools under a unique account without having to create and manage domain or local accounts. The name of the Application Pool account corresponds to the name of the Application Pool; for example, an IIS worker process (w3wp.exe) serving the default pool runs as the DefaultAppPool identity.

Application Pool Identity Accounts
Worker processes in IIS 6 and 7 run as NETWORKSERVICE by default. NETWORKSERVICE is a built-in Windows identity. It doesn't require a password and has only user privileges; that is, it is relatively low-privileged. Running as a low-privileged account is a good security practice, because then a software bug can't be used by a malicious user to take over the whole system. The problem, however, is that over time more and more Windows system services started to run as NETWORKSERVICE, and services running as NETWORKSERVICE can tamper with other services running under the same identity. Because IIS worker processes run third-party code by default (Classic ASP, ASP.NET, PHP code), it was time to isolate IIS worker processes from other Windows system services and run them under unique identities. The Windows operating system provides a feature called "Virtual Accounts" that allows IIS to create unique identities for each of its application pools.

Configuring IIS Application Pool Identities
If you are running IIS 7.5 on Windows Server 2008 R2, you don't have to do anything. For every application pool you create, the Windows Process Activation Service (WAS) creates a virtual account with the name of the new application pool and runs the pool's worker processes under this account. If you are running Windows Server 2008, you have to change the IdentityType property of the application pool you created to "ApplicationPoolIdentity". Here is how:

  • Open the IIS Management Console (INETMGR.MSC).
  • Open the Application Pools node underneath the machine node. Select the Application Pool you want to change to run under an automatically generated Application Pool Identity.
  • Right click the Application Pool and select "Advanced Settings..."

Configure AppPool Identity

  • Select the "Identity" list item and click the button with the three dots.
  • The following dialog appears.

  • Select the Identity Type "ApplicationPoolIdentity" from the combo box.
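
If you prefer the command line to the dialogs, the same switch can presumably be made with AppCmd; the pool name here is hypothetical:

# Move an application pool onto its automatically generated identity.
& "$env:windir\system32\inetsrv\appcmd.exe" set apppool "MyAppPool" /processModel.identityType:ApplicationPoolIdentity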

Securing Resources
Whenever a new application pool is created, the IIS management process creates a security identifier (SID) that represents the name of the application pool itself. That is, if you create an application pool with the name "MyNewAppPool", a security identifier with the name "MyNewAppPool" is created in the Windows security system. From this point on, resources can be secured using this identity. The identity is not a real user account, however; it will not show up as a user in the Windows User Management Console. You can try this by selecting a file in Windows Explorer and adding the "DefaultAppPool" identity to its Access Control List (ACL):

  1. Open Windows Explorer
  2. Select a file or directory
  3. Right click the file and select "Properties"
  4. Select the "Security" tab
  5. Click the "Edit" and then "Add" button
  6. Click the "Locations" button and make sure you select your machine.
  7. Enter "IIS AppPool\DefaultAppPool" in the "Enter the object names to select:" text box
  8. Click the "Check Names" button and click "OK".

By doing this, the file or directory you selected will now also allow the "DefaultAppPool" identity access. You can also do this via the command line using the ICACLS tool. The following example gives full access to the DefaultAppPool identity.
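
A minimal sketch of that ICACLS call; the path is hypothetical, and (OI)(CI)F grants inheritable full control:

# Give the DefaultAppPool identity full control over a site folder.
icacls "C:\inetpub\wwwroot\MySite" /grant "IIS AppPool\DefaultAppPool:(OI)(CI)F"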

On Windows 7 and Windows Server 2008 R2, the default is to run application pools under this security identifier, i.e., as the Application Pool Identity. To make this happen, a new identity type with the name "ApplicationPoolIdentity" was introduced. If the "ApplicationPoolIdentity" identity type is selected (the default on Windows 7 and Windows Server 2008 R2), IIS runs worker processes as the application pool identity. With every other identity type, the security identifier is only injected into the access token of the process. If the identifier is injected, content can still be ACLed for the AppPool identity, but the owner of the token is probably not unique.

Accessing the Network
Using the NETWORKSERVICE account in a domain environment has a great benefit. Worker processes running as NETWORKSERVICE access the network as the machine account. Machine accounts are generated when a machine is joined to a domain, and they look like this: <domainname>\<machinename>$, for example mydomain\machine1$. The nice thing about this is that network resources like file shares or SQL Server databases can be ACLed to allow access for this machine account.

What about AppPool identities?
The good news is that Application Pool identities also use the machine account to access network resources. No changes are required.



IIS 7.5 Hosting - ASPHostPortal :: GZIP compression in IIS 7.5 for JSON response

May 15, 2013 12:20 by author Ben

Recently we needed to build some RESTful services that responded with JSON, and to my surprise these responses were not compressed when I fired up Fiddler and checked the Transformer tab.

So I enabled Failed Request Tracing in IIS by going to IIS Manager > Web Site > Actions panel > Configure > Failed Request Tracing.

On the same page, in the "Actions" panel under the IIS area, click into Failed Request Tracing Rules.

From there, in the Actions panel, click "Add..." > Specify Content to Trace > All content (*) > Status code(s): type in "200" > Finish.

Make sure no other requests are hitting the same server at the same time, to minimize log file creation (the logs are per request); if you only have one request when you turn this on, there should be only one fr000001.xml file created in %SystemDrive%\inetpub\logs\FailedReqLogFiles\W3SVC1. Open that file in Internet Explorer, click Request Details, search for Compression, and expand the corresponding node. Mine says, under the dynamic compression node: not success, no_matching_content_type.

So it seems that even though HTTP compression is enabled by default in IIS 7.5, there are only three content types that IIS recognizes and will actually compress.

You can check this in IIS Manager > YOUR_SERVER > Management section > Configuration Editor > expand system.webServer > httpCompression > dynamicTypes.

When you have a RESTful service that serves JSON-based responses and you would like them to be gzip-compressed whenever the client's request includes the header "Accept-Encoding: gzip", you must update this list to include both MIME types:

application/json
application/json; charset=utf-8


You can either add these two types in the editor itself, or run the command-line tool AppCmd.exe from the %windir%\system32\inetsrv directory. The full command is:

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
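
Note that the command above only adds the charset-qualified type; for the plain application/json type, the same pattern presumably needs to be repeated:

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost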

Remember to apply your settings if you are using the GUI editor. I used the command-line tool because it is the only method that adds the type along with the "Entry Path"; adding it through the GUI editor for some reason doesn't add any entry path. It still works, but I just don't like missing anything.

Now restart IIS and check again in Fiddler. Voila! Now it says GZIP Encoding, and you can "Click here to transform" to see the data :)



IIS 7.0 Hosting - ASPHostPortal :: Securing IIS 7.0 Web Server on Windows Server 2008

May 8, 2013 08:51 by author Ben

Hacking a Web Server
With the advent of Windows Server 2008 and IIS 7.0 a few years back, there was a sharp turn in the way hosting services were provided on the Windows platform. Today, web servers running on Internet Information Services 7.0 (IIS 7.0) are highly popular worldwide, thanks to the .NET and AJAX revolution in web application design. Unfortunately, this also makes IIS web servers a popular target among hacking groups, and almost every day we read about new exploits being traced and patched. That does not mean that Windows is less secure than Linux. In fact, it's good that we see so many patches being released for the Windows platform, as it clearly shows that the vulnerabilities have been identified and blocked.

Many server administrators have a hard time coping with patch management on multiple servers, which makes it easy for hackers to find a vulnerable web server on the Internet. One good way I have found to ensure servers are patched is to use Nagios to run an external script on a remote host, in turn alerting on the big screen which servers need patches and a reboot after the patch has been applied. In other words, it is not difficult for an intruder to gain access to a vulnerable server if the web server is not secured, and then compromise it further to the point that the administrator has no option left but to do a fresh OS install and restore from backups.
Many tools are available on the Internet that allow an experienced or a beginner hacker to identify an exploit and gain access to a web server. The most common of them are:

1. IPP (Internet Printing Protocol) - which makes use of the IPP buffer overflow. The hacking application sends out an actual string that overflows the stack and opens up a window to execute custom shell code. It connects the CMD.EXE file to a specified port on the attacker's side and the hacker is provided with a command shell and system access.

2. UNICODE and CGI-Decode - where the hacker uses the browser on his or her computer to run malicious scripts on the targeted server. The script is executed using the IUSR_ account also called the "anonymous account" in IIS. Using this type of scripts a directory transversal attack can be performed to gain further access to the system.

Over the years, I've seen that most of the time, attacks on an IIS web server result from poor server administration, lack of patch management, bad security configuration, and so on. It is not the OS or the application that is to blame; the basic configuration of the server is the main culprit. I've outlined below a checklist with an explanation of each item. These, if followed correctly, will help prevent a lot of web attacks on an IIS web server.

Secure the Operating System
The first step is to secure the operating system that runs the web server. Ensure that Windows Server 2008 is running the latest service pack, which includes a number of key security enhancements.

Always use NTFS File System
The NTFS file system provides granular control over user permissions and lets you give users access only to what they absolutely need on a file or inside a folder.

Remove Unwanted Applications and Services
The more applications and services that you run on a server, the larger the attack surface for a potential intruder. For example, if you do not need File and Printer sharing capabilities on your shared hosting platform, disable that service.

Use Least Privileged Accounts for Service
Run services under least-privileged accounts wherever possible. By default, Windows Server 2008 has reduced the need for service accounts in many instances, but they are still necessary for some third-party applications. In that case, use local accounts rather than a domain account: using a local account means you are containing a breach to a single server.

Rename Administrator and Disable Guest
Ensure that the default account called Guest is disabled, even though it is a less-privileged account. Moreover, the Administrator account is a favorite target for hackers, and most malicious scripts out there use it to exploit a vulnerable server. Rename the Administrator account to something else so that scripts or programs with these account names hard-coded fail.

Disable NetBIOS over TCP/IP and SMB
NetBIOS is a broadcast-based, non-routable, insecure protocol, and it scales poorly, mostly because it was designed with a flat namespace. Web servers and Domain Name System (DNS) servers do not require NetBIOS or Server Message Block (SMB). These protocols should be disabled to reduce the threat of user enumeration.

To disable NetBIOS over TCP/IP, right-click the network connection facing the Internet and select Properties. Open the Advanced TCP/IP settings and go to the WINS tab. The option for disabling NetBIOS over TCP/IP should be visible there.
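
If you have many adapters to change, the same WINS-tab setting can be scripted through WMI; a minimal sketch (the value 2 means "disable NetBIOS over TCP/IP"):

# Disable NetBIOS over TCP/IP on every IP-enabled adapter.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    ForEach-Object { $_.SetTcpipNetbios(2) | Out-Null }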

To disable SMB, simply uncheck File and Printer Sharing for Microsoft Networks and Client for Microsoft Networks. A word of caution though: if you are using network shares to store content, skip this step. Only perform it if you are sure that your web server is a stand-alone server.

Schedule Patch Management
Make a plan for patch management and stick to it. Subscribe to the Microsoft Security Notification Service (http://www.microsoft.com/technet/security/bulletin/notify.asp) to stay updated on the latest patches and updates from Microsoft. Configure your server's Automatic Updates to notify you when new patches are available if you would like to review them before installation.

Run MBSA Scan
This is one of the best ways to identify security issues on your servers. Download the Microsoft Baseline Security Analyzer (MBSA) and run it on the server. It will give you details of security issues with user accounts, permissions, missing patches and updates, and much more.

That's it for the basics of securing the operating system. More fixes can be performed to secure the server further, but they are beyond the scope of this article. Let's now move on to securing the IIS web server.

IIS 7.0 is secured by default when first set up. This means that a fresh installation of IIS prevents scripts from running on the web server unless you specify otherwise: when IIS is first installed, it serves only HTML pages, and all dynamic content is blocked by default. The web server will not serve or parse dynamic pages such as ASP or ASP.NET. Since that is not what a web server is usually meant to do, the default configuration is changed to allow these extensions. Listed below are some basic points that guide you in securing the web server further:

Latest Patches and Updates
Ensure that the latest patches, updates, and service packs have been installed for the .NET Framework. These patches and updates fix a lot of issues, which enhances the security of the web server.

Isolate Operating System
Do not run your web server from the default InetPub folder. If you have the option to partition your hard disks then use the C: drive for Operating System files and store all your client web sites on another partition. Relocate web root directories and virtual directories to a non-system partition to help protect against directory traversal attacks.

IISLockDown Tool
There are some benefits to this tool and some drawbacks, so use it cautiously. If your web server interacts with other servers, test the lockdown tool to make sure it is configured so that connectivity to backend services is not lost.

Permissions for Web Content
Ensure that Script Source Access is never enabled under a web site's properties. If this option is enabled, users can access source files: if Read is selected, source can be read; if Write is selected, source can be written to. To ensure that it is disabled, open IIS, right-click the Websites folder, and select Properties. Clear the check box if it is enabled and propagate the setting to all child websites.

Enable Only Required Web Server Extensions
IIS 7.0 by default does not allow any dynamic content to be parsed. To allow a dynamic page to be executed, you need to enable the relevant extension from the Web Service Extensions property page. Always ensure that "All Unknown CGI Extensions" and "All Unknown ISAPI Extensions" are disabled at all times. If WebDAV and the Internet Data Connector are not required, disable them too.

Disable Parent Paths
This is the worst of all, and thanks to Microsoft, it is disabled in IIS 7.0 by default. The Parent Paths option permits programmers to use ".." in calls to functions by allowing paths relative to the current directory using the ".." notation. Setting this property to True may constitute a security risk, because an include path can access critical or confidential files outside the root directory of the application. Since many programmers and third-party ready-made applications use this notation, I leave it up to you to decide whether it needs to be enabled or disabled. The workaround for Parent Paths is to use the Server.MapPath option in your dynamic scripts.

Disable Default Web Site
If it is not required, stop the Default Web Site that is created when IIS 7.0 is installed, or change the properties of the Default Web Site to run on a specific IP address along with a host header. Never keep it running on All Unassigned, as most ready-made hacking packages identify a vulnerable web server by IP address rather than by domain name. If your Default Web Site is running on All Unassigned, it can serve content over an IP address in the URL rather than the domain name.

Use Application Isolation
I like this feature in IIS 7.0, which allows you to isolate applications in application pools. By creating new application pools and assigning web sites and applications to them, you can make your server more efficient and reliable, as it ensures that other applications or sites are not affected by a faulty application running under that pool.

Summary
All of the aforementioned IIS tips and tools are natively available in Windows. Don't forget to try just one at a time, testing your web accessibility after each change. Implementing all of them at the same time could be disastrous, leaving you to wonder what is causing a problem if you start having issues.

Final tip: go to your web server and run "netstat -an" (without quotes) at the command line. Observe how many different IP addresses are trying to gain connectivity to your machine, mostly via port 80. If you see IP addresses with connections established on a number of higher ports, then you've already got a bit of investigating to do.



WebMatrix Hosting – ASPHostPortal.com :: Quick Start screen

May 8, 2013 08:05 by author Ben

Microsoft has just introduced the new version of WebMatrix. As this post is written, it's still in beta.

If you were an ASP.NET developer back in the 1.1 days, you might have tried WebMatrix; it was free then, as it is today. At that time there was no Express edition of Visual Studio, and WebMatrix was the free tool for developing ASP.NET websites if you didn't want to buy Visual Studio.

What is this?
WebMatrix is everything you need to build Web sites using Windows. It includes IIS Developer Express (a development Web server), ASP.NET (a Web framework), and SQL Server Compact (an embedded database). It streamlines Web site development and makes it easy to start Web sites from popular open-source apps. The skills and code you develop with WebMatrix transition seamlessly to Visual Studio and SQL Server.

Why Use It?
You will use the same powerful Web server, database engine, and web framework that will run your Web site on the Internet, which makes the transition from development to production seamless. Beyond ensuring everything just works, WebMatrix includes new features that make Web development easier.

Who’s it for?
WebMatrix is for developers, students, or just about anyone who just wants a small and simple way to build Web sites. Start coding, testing, and deploying your own Web sites without having to worry about configuring your own Web server, managing databases, or learning a lot of concepts. WebMatrix makes Web site development easy.

Code Without Boundaries
WebMatrix provides an easy way to get started with Web development. With an integrated code editor and a database editor, Web site and server management, search optimization, FTP publishing, and more, WebMatrix provides a fresh, new Web site development experience that seamlessly bridges all the key components you need in order to create, run, and deploy a Web site.

As your needs grow, the code and skills you develop can seamlessly transition to Visual Studio – Microsoft’s premier development suite.

WebMatrix – Quick Start screen
On first-time run of Microsoft WebMatrix, the Quick Start screen is the default screen.

If you don't like it, you can disable it by giving "Do not show this screen on start-up" a check mark. WebMatrix will then use My Sites as the default screen and open the last site you were working with.

If you later want the Quick Start screen back, close the opened site; that will bring up the Quick Start screen again, and you can remove the check mark if you want.

Okay, let's try the menu items one by one.

My Sites
It will open a dialog listing all the sites you have ever worked with. Select a site and click the OK button to open it, or just double-click the site.

I found that there is no way to remove a site from the list in this beta version. Maybe it will be added in a newer version... let's hope... err, suggest it to them.

Site From Web Gallery
This option opens a dialog to select a website or application from the Web Gallery. This is the same as installing a community website or application using Web PI, but with Microsoft WebMatrix the website or application is not installed in IIS; rather, it is stored in the %user%\Documents\My Web Sites folder and runs in IIS Developer Express by default while you work with the site in WebMatrix. (If you want, you can also add the website or application to your computer's IIS manually.)

Site From Template
This is where all the fun starts. You can create your own site from the online templates listed there.

When you create a new site from a template, you will find that there are new extensions, .cshtml and .vbhtml; yep, they are new extensions introduced by Microsoft as ASP.NET Web Pages, and they support the Razor syntax.



$50.00 Cheap and Reliable Windows Cloud Server with ASPHostPortal.com

May 6, 2013 11:43 by author Jervis

ASPHostPortal.com is Microsoft's No. 1 Recommended Windows and ASP.NET Spotlight Hosting Partner in the United States. Microsoft presents this award to ASPHostPortal.com for its ability to support the latest Microsoft and ASP.NET technology, such as WebMatrix, WebDeploy, Visual Studio 2012, ASP.NET 4.5, ASP.NET MVC 4.0, Silverlight 5, and Visual Studio LightSwitch. Now, ASPHostPortal.com proudly announces the most affordable and reliable Windows Cloud Server on our hosting environment.

 

Microsoft built literally hundreds of enhancements and new features into Windows Server 2012. Below are several points highlighted by Microsoft:

  1. Powerful. It delivers the power of many servers, with the simplicity of one. Windows Server 2012 offers you excellent economics by providing an integrated, highly available, and easy-to-manage multiple-server platform.
  2. Continuous availability. New and improved features offer cost-effective IT service with high levels of uptime. Servers are designed to endure failures without disrupting services to users.
  3. Open. Windows Server 2012 enables business-critical applications and enhances support for open standards, open source applications, and various development languages.
  4. Flexible. Windows Server 2012 enables symmetrical or hybrid applications across the data center and the cloud. You can build applications that use distributed and temporally decoupled components.

From just $50.00/month, you can get a dedicated cloud server with the following specifications:

- Windows 2008 R2 / Windows 2012 Standard Edition OS
- 1 Core Processor
- 1 GB RAM
- 100 GB SAN Storage
- 1000 GB Bandwidth
- 100 Mbps Connection Speed
- Full 24/7 RDP Access
- Full 24/7 Windows Firewall Protection
- Choice of United States, European (Amsterdam), and Asia (Singapore) Data Center

"As a leader in Windows hosting, and having worked closely with Microsoft to reach this point, we're moving forward with expectant anticipation for serving our loyal clients with Windows Cloud Server solutions," said Dean Thomas, Manager at ASPHostPortal.com.

For more details, please visit http://www.asphostportal.com/Windows-Cloud-Server-Hosting-Plans.aspx.

About ASPHostPortal.com:


ASPHostPortal.com is a hosting company that specializes in Windows and ASP.NET-based hosting. Services include shared hosting, reseller hosting, and SharePoint hosting, with specialties in ASP.NET, SQL Server, and architecting highly scalable solutions. As a leading small to mid-sized business web hosting provider, ASPHostPortal strives to offer the most technologically advanced hosting solutions available to customers across the world. Security, reliability, and performance are at the core of our hosting operations, ensuring that each site and application hosted is highly secure and performs at an optimum level.



ASP.NET MVC 4 Hosting - ASPHostPortal :: How to Customize oAuth Login UI in ASP.NET 4.5 and MVC 4

May 3, 2013 08:32 by author Jervis

In this quick post, we will see how to customize the oAuth login UI in ASP.NET 4.5 and MVC 4.

One of the common requirements with oAuth login is displaying the respective provider's logo or image.

In ASP.NET 4.5 and MVC 4, we register oAuth providers in the App_Start/AuthConfig.cs file, where we can also pass additional data to the oAuth provider. We can use this extra data dictionary to pass the oAuth provider's image URL to the view, and based on it we can display an image for the respective provider.

In the provider registration we pass the icon URL with the extra data, so we can access that icon URL from the view. Open Views/Account/_ExternalLoginsListPartial.cshtml and replace the classic button markup so that each provider's icon image is rendered instead.

Now run the application by pressing F5, and we have a brand new login UI for the oAuth providers.

ASP.NET MVC 4 hosting starts from $3.00/month. Check it out for more information!

 



About ASPHostPortal.com

We're a company that works differently from most. Value is what we output and help our customers achieve, not how much money we put in the bank. It's not because we are altruistic. It's based on an even simpler principle: "Do good things, and good things will come to you".

Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience, more than the journey. Life is a continual experience. We see the Internet as an incredible amplifier of the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is a continual enlightenment of new ideas, experiences, and passions.
