Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG

Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting

Windows Server 2012 Hosting - ASPHostPortal.com :: Import IP Addresses into IPAM on Windows Server 2012

November 21, 2013 07:17 by author Ben

IPAM provides an intuitive point-and-click interface that lets you easily investigate IP address space issues. By scanning the network, IPAM maintains a dynamic list of IP addresses and allows engineers to plan for network growth, ensure IP space usage meets corporate standards, and reduce IP conflicts.

IPAM is an entirely new feature in Windows Server 2012 that provides highly customizable administrative and monitoring capabilities for the IP address space on a corporate network. In short, IPAM is a new framework for discovering, monitoring, and managing IP addresses on a network.

IPAM is a feature of Windows Server 2012 and must be installed as such, either through the Add Roles and Features Wizard or with PowerShell 3.0. It is, in my opinion, poorly documented, which makes a useful feature harder to use and understand than it should be.
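For reference, installing the feature from an elevated PowerShell 3.0 session is a one-liner; a minimal sketch using the standard Install-WindowsFeature cmdlet and the IPAM feature name:

# Install the IPAM feature together with its management tools
Install-WindowsFeature IPAM -IncludeManagementTools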

Even without a formal IPAM application, organizations keep track of their IP address information somehow, most typically in spreadsheets. IPAM lets you view IP address availability and configuration from a database perspective, enabling you to use your addresses more efficiently. IPAM features such as IP reconciliation and automation can eliminate the need to use spreadsheets for tracking addresses.

IPAM is performed on a Microsoft network by an installable feature of Windows Server 2012 that you run on a domain member server to "watch and centrally manage" the other servers on your network that are actually doing the work. IPAM manages the functionality of the following Windows servers:

  • DHCP Service
  • DNS Server
  • Network Policy Server (NPS)
  • Active Directory Domain Controller (DC)


To import IP addresses into IPAM, log on to your server and open Server Manager:

  • Click IPAM in the far left pane of Server Manager.
  • In the IPAM client, select IP Address Blocks under IP ADDRESS SPACE, and then make sure that Current view is set to IP Addresses in the drop-down menu.
  • If you look along the top of the window, you will see the IP addresses listed under fields such as IP Address and IP Address State. You can add or remove fields by right-clicking one of the existing fields.


If you want to import this information, such as Assignment Type, you need to add these fields to the first line of your import file, without spaces, as shown below:

IPAddress,IPAddressState,AssignmentType,ManagedByService,ServiceInstance,AssetTag
10.160.50.12,In-Use,Static,IPAM,localhost,BR12
10.160.50.13,In-Use,Static,IPAM,localhost,BR13
10.160.50.14,In-Use,Static,IPAM,localhost,BR14
10.160.50.15,In-Use,Static,IPAM,localhost,BR15

Alternatively, you can keep the spaces in the field names and enclose them in quotation marks, for example, "IP Address" and "IP Address State". The actual IP address data should then follow, comma-delimited, in the same order as the fields specified above.
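If you would rather generate the import file than hand-edit it, here is a minimal PowerShell sketch using the quoted, spaced field names described above. The file name, addresses, and asset tags are illustrative only; Export-Csv quotes every field, which matches the quoted variant:

# Build illustrative address records and write them out as a quoted CSV
$addresses = 12..15 | ForEach-Object {
    [pscustomobject]@{
        'IP Address'         = "10.160.50.$_"
        'IP Address State'   = 'In-Use'
        'Assignment Type'    = 'Static'
        'Managed By Service' = 'IPAM'
        'Service Instance'   = 'localhost'
        'Asset Tag'          = "BR$_"
    }
}
$addresses | Export-Csv -Path .\ipam-import.csv -NoTypeInformation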

Some fields, such as IP Address State, will require you to check which values are valid input. To find out what the possible options are:

Click Tasks in the far right corner of the IPAM client and select Add IP Address from the menu.

Select the drop-down menu beside the field to see the possible options. For instance, IP Address State can be set to In-Use, Inactive, or Reserved.

Now we need to change the status of any discovered servers to Managed. To do this, right-click a server in the Server Inventory screen and select Edit Server from the menu.

In the Add or Edit Server window, change the Manageability status to Managed and click OK. Right-click the server again and select Retrieve All Server Data from the menu. Repeat this procedure for all discovered servers. Now you are ready to add the IP addresses, ranges, and blocks to IPAM.



Windows 2012 Hosting :: Hyper-V 3.0 Network Virtualization on Windows 2012

August 28, 2013 10:21 by author Mike

Windows Server 2012 introduces a slew of new technologies. These technologies enable Windows Server systems and virtual environments to meet all manner of new requirements and scenarios, including private and public cloud implementations. Often, this type of scenario involves a single infrastructure that's shared by different business units or even different organizations.

In this article, I want to describe network virtualization. Other great capabilities include a new site-to-site VPN solution; huge enhancements to the Server Message Block (SMB) protocol, enabling VMs to run from a Windows Server 2012 file share; native NIC teaming; and consistent device naming. But I want to focus on the major network technologies that most affect virtualization.

Virtualization has always striven to abstract one resource layer from another, giving improved functionality and portability. But networking hasn't embraced this goal, and VMs are tied to the networking configuration on the host that runs them. Microsoft System Center Virtual Machine Manager (VMM) 2012 tries to link VMs to physical networks through its logical networks feature, which lets you create logical networks such as Development, Production, and Backup. You can then create IP subnets and virtual LANs (VLANs) for each physical location that has a connection to a logical network. This capability lets you create VMs that automatically connect to the Production network, for example; VMM works out the actual Hyper-V switch that should be used and the IP scheme and VLAN tag, based on the actual location to which the VM is deployed.

This feature is great. But it still doesn't help in scenarios in which I might be hosting multiple tenants that require their own IP schemes, or even one tenant that requires VMs to move between different locations or between private and public clouds, without changing IP addresses or policies that relate to the network. Typically, public cloud providers require clients to use the hosted IP scheme, which is an issue for flexible migration between on-premises and off-premises hosting.

Both these scenarios require the network to be virtualized, and the virtual network must believe that it wholly owns the network fabric, in the same way that a VM believes it owns the hardware on which it runs. VMs don't see other VMs, and virtual networks shouldn't see or care about other virtual networks on the same physical fabric, even when they have overlapping IP schemes. Network isolation is a crucial part of network virtualization, especially when you consider hosted scenarios. If I'm hosting Pepsi and Coca-Cola on the same physical infrastructure, I need to be very sure that they can't see each other's virtual networks. They need complete network isolation.

This virtual network capability is enabled through the use of two IP addresses for each VM and a virtual subnet identifier that indicates the virtual network to which a particular VM belongs. The first IP address is the standard address that's configured within the VM and is referred to as the customer address (using IEEE terms). The second IP address is the address over which the VM's traffic actually travels on the physical network and is known as the provider address.

In the example that Figure 1 shows, we have one physical fabric. Running on that fabric are two separate organizations: red and blue. Each organization has its own IP scheme, which can overlap, and the virtual networks can span multiple physical locations. Each VM that is part of the virtual red or blue network has its own customer address. A separate provider address is used to send the actual IP traffic over the physical fabric.

Figure 1: Virtual networking example

You can see that the physical fabric has the network and compute resources and that multiple VMs run across the hosts and sites. The color of each VM matches its virtual network (red or blue). Even though the VMs are distributed across hosts and locations, each virtual network, with its own IP scheme, is completely isolated from the other virtual networks.
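On the Hyper-V side, a VM is placed into a virtual network by tagging its network adapter with the virtual subnet ID. A minimal sketch, with VM names and subnet IDs that are purely illustrative:

# Tag each tenant VM's adapter with its virtual subnet ID (red = 5001, blue = 6001 here)
Set-VMNetworkAdapter -VMName 'Red-VM1' -VirtualSubnetId 5001
Set-VMNetworkAdapter -VMName 'Blue-VM1' -VirtualSubnetId 6001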

Two solutions, IP rewrite and Generic Routing Encapsulation (GRE), enable network virtualization in Windows Server 2012. Both solutions allow completely separate virtual networks with their own IP schemes (which can overlap) to run over one shared fabric.

IP rewrite. The first option is IP rewrite, which does exactly what the name suggests. Each VM has two IP addresses: a customer address, which is configured within the VM, and a provider address, which is used for the actual packet transmission over the network. The Hyper-V switch looks at the traffic that the VM is sending out, looks at the virtual subnet ID to identify the correct virtual network, and rewrites the IP address source and target from the customer addresses to the corresponding provider addresses. This approach requires many IP addresses from the provider address pool because every VM needs its own provider address. The good news is that because the IP packet isn't being modified (apart from the address), hardware offloads such as virtual machine queue (VMQ), checksum, and receive-side scaling (RSS) continue to function. IP rewrite adds very little overhead to the network process and gives very high performance.

Figure 2 shows the IP rewrite process, along with the mapping table that the Hyper-V host maintains. The Hyper-V host maintains the mapping of customer-to-provider addresses, each of which is unique for each VM. The source and destination IP addresses of the original packet are changed as the packet is sent via the Hyper-V switch. The arrows in the figure show the flow of IP traffic.

Figure 2: IP rewrite process
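The mapping table itself is populated with lookup records on each Hyper-V host. A minimal sketch of one IP rewrite entry using the NetVirtualization cmdlets; all addresses, the MAC, and the subnet ID are illustrative:

# Map customer address 10.0.0.5 to its own provider address; the Nat rule selects IP rewrite
New-NetVirtualizationLookupRecord -CustomerAddress '10.0.0.5' `
    -ProviderAddress '192.168.1.10' -VirtualSubnetID 5001 `
    -MACAddress '00155D010105' -Rule 'TranslationMethodNat'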

GRE. The second option is GRE, an Internet Engineering Task Force (IETF) standard. GRE wraps the originating packet, which uses the customer addresses, inside a packet that can be routed on the physical network by using the provider address and that includes the actual virtual subnet ID. Because the virtual subnet ID is included in the wrapper packet, VMs don't require their own provider addresses. The receiving host can identify the targeted VM based on the target customer address within the original packet and the virtual subnet ID in the wrapper packet. All the Hyper-V host running the originating VM needs to know is which Hyper-V host is running the target VM; it can then send the packet over the network.

The use of a shared provider address means that far fewer IP addresses from the provider IP pools are needed. This is good news for IP management and the network infrastructure. However, there is a downside, at least as of this writing. Because the original packet is wrapped inside the GRE packet, any kind of NIC offloading will break. The offloads won't understand the new packet format. The good news is that many major hardware manufacturers are in the process of adding support for GRE to all their network equipment, enabling offloading even when GRE is used.

Figure 3 shows the GRE process. The Hyper-V host still maintains the mapping of customer-to-provider addresses, but this time the provider address is per Hyper-V host virtual switch. The original packet is unchanged. Instead, the packet is wrapped in the GRE packet as it passes through the Hyper-V switch, which adds the correct source and destination provider addresses in addition to the virtual subnet ID.


Figure 3: GRE 
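The equivalent lookup records for GRE differ in two ways: the rule switches to encapsulation, and VMs on the same host can share one provider address. A minimal, purely illustrative sketch:

# Two VMs in virtual subnet 5001 share the host's provider address; Encap selects GRE
New-NetVirtualizationLookupRecord -CustomerAddress '10.0.0.5' `
    -ProviderAddress '192.168.1.10' -VirtualSubnetID 5001 `
    -MACAddress '00155D010105' -Rule 'TranslationMethodEncap'
New-NetVirtualizationLookupRecord -CustomerAddress '10.0.0.6' `
    -ProviderAddress '192.168.1.10' -VirtualSubnetID 5001 `
    -MACAddress '00155D010106' -Rule 'TranslationMethodEncap'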

In both technologies, virtualization policies are used between all the Hyper-V hosts that participate in a specific virtual network. These policies enable the routing of the customer address across the physical fabric and track the customer-to-provider address mapping. The virtualization policies can also define the virtual networks that are allowed to communicate with other virtual networks. The virtualization policies can be configured by using Windows PowerShell, which is a common direction for Windows Server 2012. This makes sense: When you consider massive scale and automation, the current GUI really isn't sufficient. The challenge when using native PowerShell commands is the synchronous orchestration of the virtual-network configuration across all participating Hyper-V hosts.
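As a rough sketch of what that per-host policy plumbing looks like (the interface index, prefixes, and routing domain GUID are assumptions for illustration):

# Register the host's provider address on its physical NIC
New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
    -ProviderAddress '192.168.1.10' -PrefixLength 24
# Publish the customer route for virtual subnet 5001
New-NetVirtualizationCustomerRoute -RoutingDomainID '{11111111-2222-3333-4444-000000005001}' `
    -VirtualSubnetID 5001 -DestinationPrefix '10.0.0.0/24' -NextHop '0.0.0.0'
# Review the current customer-to-provider mapping table
Get-NetVirtualizationLookupRecord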

Both options sound great, but which one should you use? In the long run, GRE should be the network virtualization technology of choice: it places fewer burdens on the network infrastructure because of its reduced provider address requirements, and once the network hardware supports GRE natively, offloading keeps working at full speed. However, until your networking equipment supports GRE, you should use IP rewrite, which requires no changes to the network infrastructure equipment.

 




