
Introducing Windows Server Free eBook from Microsoft Press PDF, EPUB and MOBI – ITProGuru Blog


For example, if your client uses consistently small files, you might want to set the allocation unit size to a smaller cluster size.
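The trade-off can be sketched numerically. In the sketch below (file and cluster sizes are invented for illustration), each file is rounded up to whole clusters, so large allocation units waste space on small files:

```python
import math

def slack_bytes(file_sizes, cluster_size):
    """Bytes wasted when each file is rounded up to whole clusters."""
    allocated = sum(math.ceil(size / cluster_size) * cluster_size
                    for size in file_sizes)
    return allocated - sum(file_sizes)

# A workload of consistently small files (sizes in bytes, illustrative)
small_files = [700, 1_500, 3_000, 2_200]

print(slack_bytes(small_files, 4 * 1024))    # 4 KB clusters -> 8984 wasted
print(slack_bytes(small_files, 64 * 1024))   # 64 KB clusters -> 254744 wasted
```

With consistently small files, the smaller cluster size wastes far less space; larger clusters mainly benefit large, sequentially accessed files.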


You can, however, do this by using Windows PowerShell. Policy caching works by having the client download the policy from a domain controller and save a copy of the policy to a local store on the client. One simple but often impractical option is to let each print client function as its own print server, as shown in the figure.





Physical storage layer
Depending on the performance needs of the workloads running in the solution, the virtual machine files for the Hyper-V clusters can be stored on different types of storage devices. While this kind of cloud solution is already possible using Windows Server 2012, the enhancements to virtualization, storage, and networking in Windows Server 2012 R2 now make it possible to optimize such solutions to achieve enterprise-quality levels of performance, reliability, and availability.

Enabling the solution using System Center 2012 R2
Windows Server 2012 R2 only represents the foundation for a cloud solution. What truly makes this new release of Windows Server a cloud-optimized operating system is that it represents the first time that Microsoft has synchronized the product release cycles of Windows Server and System Center.

The goal of doing this is to ensure that Microsoft can deliver to both its enterprise and service provider customers a completely integrated solution for building and deploying both private and hosted clouds. To achieve this goal, the 2012 R2 release of System Center (particularly VMM 2012 R2) also includes numerous enhancements, especially in the areas of storage performance, provisioning, and management. Although the focus of this book is on Windows Server 2012 R2 and its new features and enhancements, System Center 2012 R2 (and particularly VMM) should really be considered the default platform going forward for managing a cloud solution built using Windows Server 2012 R2 as its foundation.

SM-API can be used to manage many different types of storage providers and arrays. It also includes a new architecture that performs enumerations of storage resources 10 times faster than previously, and support for new Storage Spaces features like write-back caching and storage tiering, which are described later in this chapter.

Fibre Channel SANs can be prohibitively expensive for a small or midsized business because they require specialized connection hardware such as HBAs and cabling. By contrast, iSCSI needs no specialized connection hardware or special-purpose cabling because it can use a standard Ethernet network for connecting servers with the storage array.

Using these new features, organizations can deploy iSCSI storage without needing to purchase any additional storage hardware or software.

In a typical iSCSI storage scenario, an iSCSI initiator (a service running on the server consuming the storage) establishes a session, consisting of one or more TCP connections, with an iSCSI target (an object on the target server that allows the initiator to establish a connection with that server) in order to access an iSCSI virtual disk (storage backed by a virtual hard disk file) on the iSCSI Target Server (the server or device, such as a SAN, that shares storage so that users or applications running on a different server can consume the storage).
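The nesting of terms in that sentence is easier to see as a data model. This sketch (names such as iqn.example:target1 are invented) mirrors the relationships: an initiator opens a session of one or more TCP connections to a target, which fronts virtual disks backed by virtual hard disk files:

```python
from dataclasses import dataclass, field

@dataclass
class IscsiTarget:
    """Object on the target server that initiators connect to; it fronts
    one or more iSCSI virtual disks, each backed by a virtual hard disk file."""
    name: str
    virtual_disks: list = field(default_factory=list)

@dataclass
class IscsiSession:
    """A session between an initiator (service on the consuming server)
    and a target, consisting of one or more TCP connections."""
    initiator: str
    target: IscsiTarget
    tcp_connections: int = 1

target = IscsiTarget("iqn.example:target1", virtual_disks=["disk1.vhdx"])
session = IscsiSession(initiator="iqn.example:host1",
                       target=target, tcp_connections=2)
print(session.target.virtual_disks)  # ['disk1.vhdx']
```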

Using SMB 3.0 addresses this problem. Some of the key features of SMB 3.0 include the following.

SMB Scale-Out  Allows file shares to be active on all nodes of a cluster simultaneously. This results in improved utilization of network bandwidth and load balancing of SMB 3.0 clients across the cluster. If a hardware or software failure occurs on a cluster node, SMB 3.0 clients can transparently reconnect to another node without interruption.

SMB Multichannel  Provides aggregation of network bandwidth and network fault tolerance when multiple paths are available between the SMB 3.0 client and server. This results in server applications taking full advantage of all available network bandwidth and being more resilient to network failure.

Encryption can be configured on a per-share basis or for the entire SMB 3.0 file server. With the release of version 3.0, SMB also became a storage protocol that enables Hyper-V hosts to access and run virtual machine files stored on file servers on the network.

SMB 3.0 is also a protocol transport for performing live migrations of virtual machines between clustered and nonclustered Hyper-V hosts. A number of improvements have been made to SMB 3.0 in Windows Server 2012 R2. Because SMB now has so many different functions in a network and storage infrastructure built using Windows Server 2012, it now represents a common infrastructure component in many environments.
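One of the Windows Server 2012 R2 improvements described next is per-category SMB bandwidth management. As a rough sketch of the idea (the three category names mirror those used in the text; the byte-per-second caps are invented):

```python
# Per-category caps in bytes/sec. The category names mirror the three
# categories of SMB usage described in the text; the limits are invented.
limits = {
    "Default": 100 * 1024**2,
    "VirtualMachine": 500 * 1024**2,
    "LiveMigration": 200 * 1024**2,
}

def allowed(category, requested_bps):
    """Clamp a transfer's rate to its category's configured limit."""
    return min(requested_bps, limits[category])

# Live migration traffic asking for 800 MB/s is held to its 200 MB/s cap
print(allowed("LiveMigration", 800 * 1024**2) == 200 * 1024**2)  # True
```

Each category gets its own ceiling, so a burst of live migration traffic cannot starve storage or management traffic sharing the same links.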

As the figure shows, you can now configure bandwidth limits for different categories of SMB usage. For example, the figure shows three categories of bandwidth limits configured to ensure optimal performance of the various infrastructure components present in this infrastructure.

Data deduplication enhancements
Data deduplication was introduced in Windows Server 2012 to help enterprises cope with the exponentially increasing growth of data storage in their environments.

Data deduplication allows Windows Server 2012 to store more data in less physical space to optimize the capacity of the storage fabric. Data deduplication is highly scalable, resource efficient, and nonintrusive in Windows Server 2012, and can run on multiple volumes simultaneously without affecting other workloads running on the server.
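The core idea of storing more data in less physical space can be sketched as single-instance storage of identical chunks. This toy model (chunk contents are invented, and real deduplication uses variable-size chunking plus compression) keeps one physical copy per unique chunk:

```python
import hashlib

def dedup(chunks):
    """Store each unique chunk once; return (logical, physical) byte counts."""
    seen = set()
    physical = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen.add(digest)
            physical += len(chunk)
    logical = sum(len(c) for c in chunks)
    return logical, physical

# Two virtual disks sharing an identical base-image chunk (contents invented)
chunks = [b"base-os-image", b"vm1-delta", b"base-os-image", b"vm2-delta"]
logical, physical = dedup(chunks)
print(logical, physical)  # 44 31
```

The more duplication across files on a volume, the wider the gap between logical and physical usage, which is where the capacity savings come from.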

Checksums, consistency, and identity validation are used to ensure data integrity, and redundant copies of file system metadata are maintained to ensure data is recoverable in the event of corruption. Windows Server 2012 R2 includes several important improvements to the way data deduplication works.

For example, in the previous version, deduplication could be used only with files that are closed, such as virtual machine files stored in the VMM library. In Windows Server 2012 R2, deduplication can also be applied to open virtual hard disk files. In order for this scenario to work, SMB 3.0 must be used as the storage protocol.

This type of space savings can be especially beneficial for virtual desktop infrastructure (VDI) environments running on Windows Server 2012 R2 Hyper-V hosts. Deduplication is supported only for data volumes, however, and not for boot or system volumes.

In addition, ReFS volumes do not support deduplication.

Storage Spaces enhancements
Until Windows Server 2012 was released, implementing storage virtualization required purchasing proprietary third-party SAN solutions that were expensive and required using their own sets of management tools. Such solutions also required special training to implement, manage, and maintain effectively. Storage Spaces, first introduced in Windows Server 2012, was designed to make storage virtualization affordable even for small businesses.

Some of the benefits of using Storage Spaces include:

Increased scalability  Additional physical storage can easily be added and used to meet increasing business demands.

Increased flexibility  New storage pools can be created and existing ones expanded as the need arises.

Increased efficiency  Unused storage capacity can be reclaimed to enable more efficient use of existing physical storage resources.

Increased elasticity  Storage capacity can be preallocated by using thin provisioning to meet growing demand even when the underlying physical storage is insufficient.

Lower cost  Low-cost, commodity-based storage devices can be used to save IT departments money that can be better allocated elsewhere.

To understand how Storage Spaces might be used for private cloud solutions, the figure compares a traditional SAN-based storage solution with one built using Storage Spaces in Windows Server 2012. Using Storage Spaces (shown on the right of the figure) requires no proprietary technology.
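The thin provisioning mentioned among the benefits above can be sketched as a pool that promises more capacity than it physically has, consuming real blocks only as data is written. A minimal model (the byte counts are illustrative):

```python
class ThinPool:
    """Thin provisioning: the capacity promised to volumes may exceed the
    pool's physical capacity; physical blocks are consumed only on write."""
    def __init__(self, physical_bytes):
        self.physical = physical_bytes
        self.provisioned = 0   # total capacity promised to volumes
        self.used = 0          # physically consumed

    def provision_volume(self, size):
        self.provisioned += size          # may exceed self.physical

    def write(self, nbytes):
        if self.used + nbytes > self.physical:
            raise RuntimeError("pool exhausted: add physical disks")
        self.used += nbytes

pool = ThinPool(physical_bytes=10)
pool.provision_volume(100)               # promise 10x the physical capacity
pool.write(6)
print(pool.provisioned, pool.used)       # 100 6
```

When actual writes approach the pool's physical limit, the administrator adds disks to the pool rather than reprovisioning the volumes.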

Instead, all of the components of this solution can use off-the-shelf, commodity-based server hardware. Connectivity between the Hyper-V host cluster and the storage requires only a standard Ethernet network (typically 10 GbE) with high-performance network interface cards installed in the Hyper-V hosts, and uses SMB 3.0.

Storage Spaces in Windows Server 2012
The reaction when Storage Spaces was announced was somewhat qualified, especially by large enterprises that run workloads requiring the highest levels of performance involving millions of IOPS and massive throughput.

The reason for this was that it was initially assumed that the performance of a virtualized storage solution based on Storage Spaces would fall short of what a typical SAN array can deliver. However, Microsoft soon proved its critics wrong with a demonstration performed at TechEd, where a three-node high-performance server cluster was connected to a JBOD filled with enterprise-grade SSDs.

Clearly, Storage Spaces is an enterprise-ready storage virtualization technology, and its usage scenarios are not limited only to smaller deployments. The challenge with the Windows Server 2012 version of Storage Spaces, however, is deciding whether you want to optimize performance or storage capacity when building your storage virtualization solution.

For example, if you use Storage Spaces to create storage pools backed by low-cost, large-capacity commodity HDDs, you get a capacity-optimized storage solution but the performance might not be at the level that some of your applications require. This is typically because large-capacity HDDs are optimized for sequential data access while many server applications perform best with random data access. The logical solution is to use a mix of low-cost, large-capacity commodity HDDs together with expensive, high-performance enterprise-class SSDs.

Building a Storage Spaces solution along these lines can provide you with the best of both worlds, and deliver high levels of IOPS at a relatively low cost compared to using a SAN.

Capacity-optimized approach  Uses only low-cost, large-capacity commodity HDDs to provide high capacity while minimizing cost per terabyte.

Performance-optimized approach  Uses only expensive, high-performance enterprise-class SSDs to provide extreme performance, high throughput, and the largest number of IOPS per dollar.

Neither extreme is necessary for most workloads, however. This is because while most enterprise workloads have a relatively large data set, the majority of data in this data set is often cold (seldom-accessed) data.

Only a minority of data is typically in active use at a given time, and this hot data can be considered the working set for such workloads. Naturally, this working set also changes over time for the typical server workload. Since the working set is small, it would seem natural to place the hot data (the working set) on high-performance SSDs while keeping the remaining cold data on low-cost HDDs.

Storage Spaces in Windows Server 2012 R2
As the figure shows, the Windows Server 2012 R2 version of Storage Spaces now allows you to create a tiered storage solution that transparently delivers an appropriate balance between capacity and performance that can meet the needs of enterprise workloads.

How does Storage Spaces accomplish this? By having the file system actively measure the activity of the workload in the background and then automatically and transparently move data to the appropriate tier (SSD or HDD) depending on how hot or cold the data is determined to be. If a portion of the data for a particular file becomes hotter (is accessed more frequently), it gets moved from the HDD tier to the SSD tier. And if a portion of data becomes cooler (is accessed less frequently), it gets moved from the SSD tier to the HDD tier.

This seamless movement of data between tiers is configured by default to happen daily in 1 MB chunks, but you also have the option of configuring the scheduled task for this operation to run as frequently as you want. Data moves between tiers in the background and has minimal impact on the performance of the storage space.
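The heat-based movement described above can be sketched as a periodic re-ranking of 1 MB chunks. In this illustrative model (chunk names, access counts, and SSD capacity are all invented), the most frequently accessed chunks land on the SSD tier and the rest on HDD:

```python
def retier(chunk_heat, ssd_slots):
    """Place the most frequently accessed chunks on the SSD tier.
    chunk_heat maps a chunk ID to its access count; ssd_slots is the
    SSD tier's capacity in chunks."""
    ranked = sorted(chunk_heat, key=chunk_heat.get, reverse=True)
    hot = set(ranked[:ssd_slots])
    return {c: ("SSD" if c in hot else "HDD") for c in chunk_heat}

heat = {"a": 90, "b": 2, "c": 57, "d": 1}   # access counts per 1 MB chunk
print(retier(heat, ssd_slots=2))  # chunks a and c move to the SSD tier
```

Rerunning the ranking on the daily schedule (or however often the scheduled task is configured) is what lets placement track a shifting working set.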

If needed, you can use Windows PowerShell to assign certain files to a specific tier, thereby overriding the automatic placement of data based on heat. For example, the parent virtual hard disk file for a collection of pooled virtual machines in a VDI environment might be assigned to the SSD tier to ensure the file always remains pinned to this tier.

The result can be significant improvements in the boot times of the hundreds or thousands of virtual desktops derived from this parent virtual hard disk.

While the goal of tiering is to balance capacity against performance, the purpose of write-back caching is to smooth out short-term bursts of random writes. Write-back caching integrates seamlessly into tiered volumes and is enabled by default.

The write-back cache is located on the SSD tier of a storage space and services smaller, random writes; larger, sequential writes are serviced by the HDD tier. You can also enable write-back caching on nontiered volumes.
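The routing rule just described can be sketched as a simple classifier. The 256 KB cutoff below is an invented illustration, not the actual heuristic Windows uses:

```python
def route_write(size_bytes, random_io, threshold=256 * 1024):
    """Send smaller, random writes to the SSD write-back cache; larger,
    sequential writes go straight to the HDD tier."""
    if random_io and size_bytes < threshold:
        return "SSD write-back cache"
    return "HDD tier"

print(route_write(8 * 1024, random_io=True))      # SSD write-back cache
print(route_write(4 * 1024**2, random_io=False))  # HDD tier
```

Absorbing the small random writes on SSD is what smooths out short bursts, while the HDD tier keeps handling the sequential streams it is good at.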

Storage QoS can also allow you to set triggers to send notifications when a specified minimum IOPS is not met for a virtual disk. This enables scenarios such as configuring different service-level agreements (SLAs) for different types of storage operations within your infrastructure.

For example, a hoster might use this feature to configure Bronze, Silver, and Gold SLAs for storage performance available for different classes of tenants. You can even set alerts that trigger when virtual machines are not getting enough IOPS for storage access.
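The minimum/maximum IOPS behavior can be sketched as a clamp plus an alert condition. In this illustrative model (the IOPS numbers are invented), a virtual disk is capped at its maximum, and an alert fires when delivery falls below the configured minimum:

```python
def qos_status(observed_iops, min_iops, max_iops):
    """Cap a virtual disk at its maximum IOPS and flag an alert when the
    delivered rate falls below the configured minimum."""
    delivered = min(observed_iops, max_iops)
    alert = delivered < min_iops
    return delivered, alert

print(qos_status(observed_iops=1200, min_iops=300, max_iops=500))  # (500, False)
print(qos_status(observed_iops=100,  min_iops=300, max_iops=500))  # (100, True)
```

Bronze, Silver, and Gold tiers would simply be three different (min, max) pairs applied to different tenants' virtual disks.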

Storage QoS is also useful for restricting the disk throughput of overactive or disruptive virtual machines that are saturating the storage array. As the figure shows, Storage QoS can even be configured while the virtual machine is running. This gives organizations a lot of flexibility in how they manage access to the storage fabric from workloads running in their cloud environments. Windows Server 2012 R2 also includes other storage enhancements.

For example, enhancements to CSV in failover clustering now result in more highly optimized rebalancing of SoFS traffic.

Failover Clustering
Both virtualization and storage can only take you so far unless you also add high availability into the mix. Failover Clustering, a key feature of the Windows Server platform, is designed to do just that by providing high availability and scalability to many types of server workloads, including Hyper-V hosts, file servers, and server applications such as Microsoft SQL Server and Microsoft Exchange Server that can run on both physical servers and virtual machines.

While Windows Server 2012 included a number of important enhancements to the Failover Clustering feature, Windows Server 2012 R2 adds even more.

Improved scalability  Compared with Failover Clustering in Windows Server 2008 R2, the number of cluster nodes supported increased from 16 to 64 in Windows Server 2012. The number of clustered roles or virtual machines also increased from 1,000 to 8,000 (up to 1,024 per node) in the new platform.

This increased scalability enabled new scenarios and efficiencies to help IT departments deliver more for the dollar. Other CSV improvements in Windows Server 2012 included support for BitLocker Drive Encryption, removal of external authentication dependencies, and improved file backup.

Updating failover cluster nodes
Cluster-Aware Updating (CAU) was introduced in Windows Server 2012 to enable software updates to be applied automatically to the host operating system or other system components on the nodes of a failover cluster, while maintaining availability during the update process.

CAU reduced maintenance time by automating what was previously a very repetitive task.

Quorum improvements
New features of the cluster quorum feature in Windows Server 2012 included simplified quorum configuration, support for specifying which cluster nodes have votes in determining quorum, and dynamic quorum, which gives administrators the ability to automatically manage the quorum vote assignment for a node based on the state of that node.
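Dynamic quorum's advantage over a static majority can be sketched by replaying node failures. In this simplified model (a five-node cluster is assumed, and votes are recalculated only after a survived event), sequential losses let the cluster shrink gracefully, while a simultaneous loss of a majority of current voters does not:

```python
def dynamic_quorum(events, nodes=5):
    """Replay batches of node losses against a simplified dynamic quorum.
    Each event is the number of nodes lost at once. After surviving an
    event, the remaining nodes' votes are recalculated, so the cluster
    can shrink gradually without losing quorum."""
    votes = nodes   # voters as of the last recalculation
    up = nodes      # nodes currently running
    for lost in events:
        up -= lost
        if up * 2 <= votes:   # survivors are not a majority of voters
            return False      # quorum lost; the cluster stops
        votes = up            # dynamic quorum: votes recalculated
    return True

print(dynamic_quorum([1, 1, 1]))  # True: 5 -> 2, one node at a time
print(dynamic_quorum([3]))        # False: losing 3 of 5 at once
```

The same three-node loss is survivable when spread over time but fatal when simultaneous, which is exactly the behavior dynamic vote management is meant to provide.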

Other enhancements
Some of the many other enhancements to Failover Clustering in Windows Server 2012 included simplified migration of the configuration settings of clustered roles, more robust integration with Active Directory Domain Services, improved cluster validation tests, improved Windows PowerShell support, Node Maintenance Mode, clustered tasks, new clustered roles like iSCSI Target, guest clusters using virtual Fibre Channel, and more.

Many of the Hyper-V enhancements in Windows Server 2012 are also relevant to Failover Clustering, for example virtual machine prioritization, pre-emption to shut down low-priority virtual machines, virtual machine health monitoring, Hyper-V Replica Broker, and so on. Windows Server 2012 R2 adds guest clustering using shared virtual disks, an exciting new capability that will be especially appreciated by hosters who want to maintain separation between their own storage infrastructure and that of their tenants.

Hosting highly available workloads
Consider your typical hoster for a moment. A hoster provides customers with services that allow them to run their virtual machines in the cloud instead of on-premises at their company. These virtual machines are pretty important to the customers, too, since they are typically running server workloads, like SQL Server, that are critical to the operation of their business.

First, the customer has been using host clustering, which means running the Failover Clustering feature in the parent partition of two or more Hyper-V hosts. To understand host clustering, think of making a single virtual machine highly available. Using host clustering helps you ensure continuous availability in the event of a hardware failure on a host, when you need to apply software updates to the parent partition resulting in a reboot being required, and similar scenarios.

Second, the customer has been using guest clustering, which means running the Failover Clustering feature in the guest operating system of two or more virtual machines. To understand guest clustering, think of multiple virtual machines in a failover cluster. Using guest clustering helps you proactively monitor application health and mobility within the guest operating system and protect against application failures, guest operating system problems, host and guest networking issues, and other problem scenarios.

By combining both types of failover clustering like this, the customer has the best of both worlds. In other words, having a cluster of clusters (guest clustering on top of host clustering) gives you the highest level of availability for your virtualized workloads.

Separating virtual resources from physical infrastructure
The new guest clustering using shared virtual disks capability of Windows Server 2012 R2 now makes such a scenario possible for hosters.

This approach can benefit hosters since it allows them to maintain strict control over the physical infrastructure of their cloud while providing great flexibility in how they deliver virtual resources to customers. Specifically, it allows them to provision virtual machines, services, and applications to customers together with the virtual compute, storage, and network resources needed while keeping the underlying physical resources opaque to them.

All the customer cares about is the virtual environment in which their workloads run. The hoster should be able to reallocate, reconfigure, and upgrade their physical infrastructure without interrupting customer workloads, or the customers even being aware of it. Windows Server 2012 made guest clustering easier in two ways.

But the problem with guest clustering in Windows Server 2012 is that it still requires that something in your virtual infrastructure (the guest operating system of the clustered virtual machines) be able to directly connect to something in your physical infrastructure (a LUN on your iSCSI or Fibre Channel SAN). What this effectively does when guest clustering is implemented is open a hole between the physical infrastructure and the clustered virtual machines, as shown in the figure. This is an important issue for most hosters, as they usually utilize separate networks for tenant virtual machine connectivity and storage infrastructure connectivity, and, for security and reliability reasons, they want to keep these networks completely separate.

To the hosts, these shared virtual disks look like simple VHDX files attached to multiple virtual machines (each virtual machine already has at least one other virtual hard disk for the guest operating system). To the virtual machines themselves, however, the shared virtual disks appear to be, and behave as if they are, virtual Serial Attached SCSI (SAS) disks that can be used as shared storage for a failover cluster.

The figure shows how the hosts and virtual machines view the shared VHDX files in a guest cluster. Both of these approaches allow the use of low-cost commodity storage instead of more expensive SAN solutions for the shared storage used by the guest cluster. They also allow you to deliver guest clustering using exactly the same infrastructure you use to deliver standalone virtual machines.

Using shared virtual disks
Implementing guest clustering using shared virtual disks on Windows Server 2012 R2 is easy. One caveat: in some cases you will have to remove the disk from the controller and reselect it in order to share it.

The key, of course, is that a SoFS allows you to achieve a similar level of reliability, availability, manageability, and performance as that of a SAN. And since a SoFS can use Storage Spaces, another feature of Windows Server 2012 that allows you to use low-cost commodity disks to create pools of storage from which you can provision resilient volumes, the approach can often be much more cost-effective than using a SAN. Consider an example: a virtual machine running on a Hyper-V host wants to access a file in a shared folder named Share2 on a two-node SoFS.

When the virtual machine attempts to connect to the share, it might connect to either Share2 on File Server 1 or to Share2 on File Server 2. Unfortunately, this is not an optimal connection because the file it wants to access is actually located on storage attached to File Server 2.

An example of such metadata would be the file system changes that occur when a virtual machine running on a Hyper-V cluster is turned off. Such metadata changes are routed over the SMB path to the coordinator node. In Windows Server 2012, if you had eight cluster nodes using four LUNs for shared storage, you had to manually spread the CSV disks across the cluster. Also, Failover Clustering in Windows Server 2012 had no built-in mechanism for ensuring that they stayed spread out.

In fact, all of the CSV disks could be owned by a single node. Failover Clustering in Windows Server 2012 R2, however, now includes a mechanism for fairly distributing the ownership of CSV disks across all cluster nodes, based on the number of CSV disks each node owns.

Rebalancing ownership of CSV disks happens automatically whenever a new node is added to the cluster, a node is restarted, or a failover occurs.

Changes to heartbeat threshold
While the goal of failover clustering is to deliver high availability for server workloads, under the hood failover clustering is simply a health detection model. Every second, a heartbeat connection is tested between nodes in the cluster.

If no heartbeat is heard from a node for one second, nothing happens. No heartbeat for two seconds? Nothing happens. Three seconds? Four seconds? Five seconds? Five seconds is the default heartbeat threshold for all cluster roles in the Windows Server 2012 version of Failover Clustering. Consider an example: VM-A is currently running on HOST-1, and everything is working fine until an unexpected network interruption occurs on the network connecting the nodes.

Whatever the cause of the network interruption, Failover Clustering decides that since the heartbeat threshold had been exceeded for HOST-1, that node must be down, so it assumes that VM-A has become unavailable even though clients are still able to access the workload running on the virtual machine. Since HOST-1 has been determined to have failed, Failover Clustering begins taking remedial action on that node, which terminates any active client connections to the workload.

Meanwhile, it starts booting up VM-A on a different node so clients will be able to access the workload again. What has actually happened here, unfortunately, is that Failover Clustering could be said to have triggered a false failover. The best action in this case would be to ensure that all network cables are physically secured.

But what if your network experiences more mysterious transient failures? For one reason or another, transient network interruptions are sometimes unavoidable for some customers. Yet network interruptions that exceed the heartbeat threshold can cause a cluster to fail over when such action is neither necessary nor desired. To address this, Windows Server 2012 R2 increases the heartbeat threshold, but only for the Hyper-V clustered role, instead of using the five-second threshold applied to all clustered roles in Windows Server 2012.

For Hyper-V cluster nodes on the same subnet, the threshold is now 10 seconds. And for Hyper-V cluster nodes on different subnets, the threshold is now 20 seconds. This reduction of cluster sensitivity to health problems was specifically made to enable Hyper-V clusters to provide increased resiliency to packet loss on unreliable networks.
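The thresholds just described reduce to a lookup plus a comparison. In this sketch, the dictionary keys are invented labels for the three cases in the text, not real cluster setting names:

```python
# Heartbeat thresholds, in seconds, from the text: 5 for all roles in
# Windows Server 2012; 10 and 20 for Hyper-V roles in Windows Server 2012 R2.
THRESHOLDS = {
    "default": 5,
    "hyperv_same_subnet": 10,
    "hyperv_cross_subnet": 20,
}

def node_presumed_down(missed_seconds, role="default"):
    """A node is declared failed only once consecutive missed heartbeats
    reach the role's threshold; shorter interruptions are tolerated."""
    return missed_seconds >= THRESHOLDS[role]

print(node_presumed_down(7, "default"))             # True: failover begins
print(node_presumed_down(7, "hyperv_same_subnet"))  # False: glitch tolerated
```

The same seven-second interruption triggers a failover under the old default but is ridden out by a Hyper-V role under the relaxed thresholds, which is precisely the false-failover scenario the change targets.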

However, increasing the heartbeat threshold like this can have the negative result of greater downtime when a real failure does happen, so administrators also have the option of configuring a lower threshold if they want to. Network disconnections can also occur at the virtual machine level on clustered Hyper-V hosts. If this happens in Windows Server 2012, the virtual machine continues to run on the cluster node even though the workload on the virtual machine is no longer available to clients.

Windows Server 2012 R2 addresses this by allowing such a virtual machine to be moved to another cluster node; enabling this capability for a virtual machine running on a Hyper-V host cluster requires only a simple configuration step. CSV now supports parity spaces and the new features of Storage Spaces in Windows Server 2012 R2, such as tiered spaces and write-back caching.

Failover clusters can now be deployed without creating any computer objects in Active Directory Domain Services. A new cluster dashboard is now included in Failover Cluster Manager that lets you quickly view the health of all of your clusters.

Networking
Currently, IT is all about the cloud, and the foundation of cloud computing is infrastructure. Hyper-V hosts provide the compute infrastructure needed for running virtualized workloads in a cloud infrastructure. Storage Spaces and the Scale-Out File Server (SoFS) are two storage technologies that can enable new scenarios and help lower costs when deploying the storage infrastructure for a cloud solution.

Networking is the underlying glue that holds your infrastructure together, makes possible the delivery of services, makes remote management a reality, and more. Windows Server 2012 improved on earlier virtual machine queue technology by introducing Dynamic VMQ, which dynamically distributes incoming network traffic processing to host processors based on processor use and network load.

Receive Side Scaling
Receive Side Scaling (RSS) allows network adapters to distribute kernel-mode network processing across multiple processor cores in multicore systems. Such distribution of processing enables support of higher network traffic loads than are possible if only a single core is used.

NIC teaming serves two purposes: it helps ensure availability by providing traffic failover in the event of a network component failure, and it enables aggregation of network bandwidth across multiple NICs. Quality of Service (QoS) matters as well: by using QoS to prioritize different types of network traffic, you can ensure that mission-critical applications and services are delivered according to SLAs while optimizing user productivity.

Windows Server 2012 introduced a number of new QoS capabilities, including Hyper-V QoS, which allows you to specify upper and lower bounds for network bandwidth used by a virtual machine, and new Group Policy settings to implement policy-based QoS by tagging packets with a priority value. Data Center Bridging (DCB)-capable network adapter hardware can be useful in cloud environments, where it can enable storage, data, management, and other kinds of traffic all to be carried on the same underlying physical network in a way that guarantees each type of traffic its fair share of bandwidth.

DHCP failover was also introduced, whereby one of the DHCP servers could assume responsibility for providing addresses to all the clients on a subnet when the other DHCP server became unavailable. IPAM provided a central and integrated experience for managing IP addresses that could replace manual, work-intensive tools such as spreadsheets and custom scripts, which can be tedious, unreliable, and scale poorly.

Network virtualization thus lets the cloud provider run multiple virtual networks on top of a single physical network in much the same way as server virtualization lets you run multiple virtual servers on a single physical server. Network virtualization also isolates each virtual network from every other virtual network, with the result that each virtual network has the illusion that it is a separate physical network. This means that two or more virtual networks can have the exact same addressing scheme, yet the networks will be fully isolated from one another and each will function as if it is the only network with that scheme.
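The isolation guarantee can be sketched as routing keyed by the pair (tenant, customer address) rather than by the address alone. The tenant names, IP address, and host names below are invented:

```python
# Two tenants reuse the same customer IP; packets stay apart because every
# lookup is keyed by (tenant, address), never by the address alone.
routing_table = {
    ("tenant-blue", "10.0.0.5"): "host-A",
    ("tenant-red",  "10.0.0.5"): "host-B",  # same IP, different virtual network
}

def deliver(tenant, customer_ip):
    """Resolve a customer address within its own virtual network only."""
    return routing_table[(tenant, customer_ip)]

print(deliver("tenant-blue", "10.0.0.5"))  # host-A
print(deliver("tenant-red", "10.0.0.5"))   # host-B
```

Because the tenant identifier is part of every lookup, identical addressing schemes can coexist without any possibility of one tenant's traffic reaching another's network.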

BranchCache enhancements
BranchCache allows organizations to increase the network responsiveness of centralized applications that are being accessed from remote offices, with the result that branch office users have an experience similar to being directly connected to the central office. BranchCache was first introduced in Windows Server 2008 R2 and was enhanced in Windows Server 2012 with improved performance and reduced bandwidth usage, default encryption of cached content, new tools that allow you to preload cacheable content onto your hosted cache servers even before the content is first requested by clients, single-instance storage and downloading of duplicated content, and tighter integration with the File Server role.

There were also other networking technologies introduced or improved in Windows Server 2012 that closely relate to other infrastructure components like compute and storage. For example, there was the Hyper-V Extensible Switch, which added new virtual networking functionality to the Hyper-V server role.

There was version 3.0 of SMB, described earlier in this chapter. And there were many more new networking features and capabilities introduced in Windows Server 2012. Many of the above networking features and technologies have now been improved even further in Windows Server 2012 R2. RSS in Windows Server 2012 helps you get around single-core bottlenecks by allowing kernel-mode network processing to be spread across multiple cores in a multicore server system. In Windows Server 2012, however, virtual machines were limited to using only one virtual processor for processing network traffic.

As a result, virtual machines running on Hyper-V hosts were unable to make use of RSS to handle the highest possible network traffic loads. To compound the problem, VMQ would affinitize all traffic destined for a virtual machine to one core inside the host for access control list (ACL) and vmSwitch extension processing. With Windows Server 2012 R2, however, this is no longer a limitation. This is demonstrated by the figure, which shows a virtual machine that has four virtual processors assigned to it.

The physical NIC can now spread traffic among available cores inside the host, while the virtual NIC distributes the processing load across the virtual processors inside the virtual machine.

The result of using vRSS is that it is now possible to virtualize network-intensive physical workloads that were traditionally run on bare-metal machines.

A typical usage scenario might be a Hyper-V host running only one or a small number of virtual machines, where the applications running in those virtual machines generate a large amount of network traffic. Because extra processing is required for vRSS to spread the incoming network traffic inside the host, vRSS should be enabled only on virtual machines that need it.
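The spreading idea behind RSS-style receive scaling can be sketched in a few lines: a hash of each packet's flow identifiers indexes an indirection table of processor numbers, so every packet of one flow lands on the same processor while different flows spread out. This is a conceptual illustration only, not the actual Windows vRSS implementation; the CRC-32 hash stands in for the Toeplitz hash real NICs use.

```python
import zlib

def rss_cpu(src_ip, dst_ip, src_port, dst_port, indirection_table):
    """Pick a processor for a packet by hashing its flow 4-tuple."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    h = zlib.crc32(key)  # stand-in for the Toeplitz hash used by real NICs
    return indirection_table[h % len(indirection_table)]

# Four virtual processors available inside the virtual machine.
table = [0, 1, 2, 3, 0, 1, 2, 3]

# Every packet of the same TCP flow maps to the same virtual processor...
cpu_a = rss_cpu("10.0.0.5", "10.0.0.9", 49152, 443, table)
assert cpu_a == rss_cpu("10.0.0.5", "10.0.0.9", 49152, 443, table)

# ...while different flows can be spread across different processors.
flows = [("10.0.0.5", "10.0.0.9", p, 443) for p in range(49152, 49252)]
used = {rss_cpu(*f, table) for f in flows}
print(f"flows spread across {len(used)} virtual processors")
```

The per-flow consistency is the important property: it keeps TCP segments in order while still letting the aggregate load use all cores.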

The figure shows how to use the Get-NetAdapterRss cmdlet to get all RSS-capable network adapters on a system that have RSS enabled and display their names, interface descriptions, and state.

NIC Teaming, also called load balancing and failover (LBFO), can provide two types of benefits for enterprise networks. First, it allows you to aggregate the throughput from multiple network adapters. Second, it provides traffic failover so that the server keeps its network connectivity if one of the adapters in the team fails.

Suppose, for example, that you team two 1-gigabit network adapters and one of them fails. If this happens, the throughput drops from 2 gigabits to 1 gigabit, and while such a 50 percent drop in network traffic handling capability could affect the performance of applications running on the server, the good thing is that the server still has some connectivity with the network. Without NIC teaming, failure of a single NIC would have caused the throughput to drop from 1 gigabit to zero, which is probably much worse from a business point of view.

Before Windows Server 2012, if you wanted to make use of NIC teaming, you had to use third-party NIC teaming software from your network adapter vendor. With the release of Windows Server 2012, however, NIC teaming became a built-in feature called Windows NIC Teaming that makes it possible to team together even commodity network adapters to aggregate throughput and enable fault tolerance.

Address Hash Selecting this mode causes Windows to load balance network traffic among the physical network adapters that have been teamed together. This is the default load-balancing mode when you create a new team, and it should be used for most servers, including Hyper-V hosts.

Hyper-V Port Selecting this mode causes Windows to load balance network traffic by virtual machine. This mode has the limitation that each virtual machine running on the host can use only a single NIC in the team for sending and receiving traffic. As a result, this mode should be selected only when the virtual machines running on the host have multiple virtual network adapters. If you use this mode when a virtual machine has only one virtual network adapter and the physical NIC being used by that adapter fails, the virtual machine will lose all connectivity with the network, even though the host machine still has connectivity because of the fault tolerance of the NIC team.

Another limitation of how NIC Teaming is implemented in Windows Server 2012 is the way that the default load-balancing mode works. The hashing algorithm used by Windows NIC Teaming first creates a hash based on the contents of the network packet. It then assigns any packets having this hash value to one of the network adapters in the team. The result is that all packets belonging to the same Transmission Control Protocol (TCP) flow, or stream, are handled by the same network adapter.

This type of load balancing can prove to be ineffective in certain scenarios. For example, say you have a virtual machine with a single virtual network adapter running on a Hyper-V host that has two network adapters configured as a team in Address Hash mode, and the applications in the virtual machine are currently utilizing most of the available throughput of the team. Now suppose a large file copy is started. Because the file copy is a single TCP stream, all of its packets hash to the same value and are therefore handled by a single NIC in the team. The result is that flows for any other applications that are currently being handled by that particular NIC might become starved for bandwidth, impacting the performance of those applications until the file copy completes. Windows Server 2012 R2 addresses this by adding a new Dynamic load-balancing mode that can rebalance traffic across the members of the team as load changes.
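The per-flow pinning described above can be sketched as follows. This is an illustration of the general idea, not the actual Windows NIC Teaming hashing algorithm; the point is that a hash over a flow's addresses always selects the same team member, so a single TCP stream can never use more than one NIC's bandwidth.

```python
import hashlib

def team_member(src_ip, dst_ip, src_port, dst_port, team_size):
    """Map a flow's address hash to one NIC in the team."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % team_size

# A two-NIC team: every packet of one large file-copy flow hashes to
# the same member, so that flow is capped at a single NIC's throughput,
# and other flows sharing that NIC compete with it for bandwidth.
flow = ("192.0.2.10", "192.0.2.20", 50000, 445)
nic = team_member(*flow, team_size=2)
assert all(team_member(*flow, team_size=2) == nic for _ in range(5))
print(f"file-copy flow pinned to NIC {nic}")
```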

IPAM enhancements

Enterprise networks today tend to be a lot more complex than they were 20 or even 10 years ago.

With the advent of virtualization technologies like Hyper-V, network administrators have to deal with both physical networks and virtual networks. DNS is also needed in order to provide name resolution services, so servers can have easy-to-remember friendly names like SRV-A instead of hard-to-memorize IP addresses. DHCP simplifies the task of allocating IP addresses to clients on a network (and to servers, by using DHCP reservations), but large organizations can have multiple DHCP servers, with each server having dozens or more scopes and each scope having its own set of options. Similarly, DNS simplifies management of fully qualified domain names (FQDNs) for both servers and clients, but large organizations can have multiple DNS servers, with each one authoritative over dozens of zones and each zone containing thousands of resource records.

How does one manage all this? Which IP addresses are actually being used at each site where my company has a physical presence? Which IP addresses have been assigned to virtual network adapters of virtual machines running on Hyper-V hosts? How can I modify a particular scope option for a certain number of scopes residing on several different DHCP servers? How can I find all scopes that have 95 percent or more of their address pool leased out to clients? How can I track all the IP addresses that have been assigned over the last 12 months to a certain server on my network?
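Questions like the scope-utilization one above reduce to simple queries once the inventory data is centralized. A rough Python sketch, with invented sample data (real IPAM queries run against its database, not Python lists):

```python
# Hypothetical scope inventory: server name, scope, pool size, leases.
scopes = [
    {"server": "DHCP1", "scope": "10.1.0.0/24", "pool": 200, "leased": 192},
    {"server": "DHCP1", "scope": "10.2.0.0/24", "pool": 200, "leased": 120},
    {"server": "DHCP2", "scope": "10.3.0.0/24", "pool": 250, "leased": 249},
]

# Find all scopes with 95 percent or more of their pool leased out.
nearly_full = [s for s in scopes if s["leased"] / s["pool"] >= 0.95]

for s in nearly_full:
    print(f'{s["server"]} {s["scope"]}: {s["leased"]}/{s["pool"]} leased')
```

Without a central store of this data, the same question means visiting every DHCP server individually, which is exactly the pain IPAM removes.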

Administrators of large enterprises often want answers to questions like these, and they want them quickly. Cloud hosting providers can have even greater difficulties keeping track of such information because their environments include both physical and virtual networks, and because of the address space reuse that often happens in multitenant cloud environments.

In the past, most large enterprises and hosters have relied on spreadsheets, custom software developed in-house, or third-party commercial programs to keep track of IP addressing schemes, DHCP server configurations, and DNS server configurations. Beginning with Windows Server 2012, however, Microsoft introduced an in-box solution for performing these kinds of tasks: IP Address Management (IPAM). IPAM includes functionality for performing address space management in multiserver environments, along with monitoring and network auditing capabilities.

IPAM can automatically discover the IP address infrastructure of servers on your network and allows you to manage them from a central interface that is integrated into Server Manager.

Although IPAM in Windows Server 2012 provided enough functionality for some organizations, other organizations, especially hosters, wanted something more. First, and perhaps most importantly, IPAM in Windows Server 2012 R2 now allows you to manage your virtual address space in addition to your physical address space. This means that enterprises that have a mixture of physical and virtual environments now have an in-box, unified IP address management solution. This enhancement should be especially useful to very large enterprises and to hosters whose networks often consist of multiple datacenters in different geographical locations.

Such networks typically consist of an underlying fabric layer of infrastructure resources with a multitenant virtual network layered on top. A second enhancement is role-based access control, which provides the organization with a mechanism for specifying access to objects and capabilities in terms of the organizational structure.

Instead of assigning certain rights and permissions to a user or group of users, role-based access control allows you to assign a certain role instead. The role aligns with the kinds of tasks a certain type of user needs to be able to perform, and assigning the role to a user automatically grants that user all the necessary rights and permissions they need to do their job.

Role-based access control in IPAM now makes doing these things simple, and it is also highly granular. For example, if a user needs to manage only a specific set of DHCP scopes, role-based access control in IPAM allows you to grant that access to the user while preventing her from having any other IP address management capabilities in your organization. Role-based access control also includes the ability to delegate administrative privileges.

This means that a senior administrator can delegate to junior administrators the ability to perform certain kinds of address management tasks in your organization.
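The role-based model just described can be sketched minimally: a role bundles permitted operations over a restricted scope, and a user is checked against the role rather than against individually assigned permissions. The role and operation names below are invented for illustration and are not IPAM's actual role set.

```python
# Hypothetical role definitions: operations allowed, and over which scopes.
ROLES = {
    "DhcpScopeAdmin": {
        "operations": {"view_scope", "edit_scope_options"},
        "scopes": {"10.1.0.0/24"},
    },
}
ASSIGNMENTS = {"alice": "DhcpScopeAdmin"}

def is_allowed(user, operation, scope):
    """Authorize by role: both the operation and the scope must match."""
    role = ROLES.get(ASSIGNMENTS.get(user, ""))
    if role is None:
        return False
    return operation in role["operations"] and scope in role["scopes"]

assert is_allowed("alice", "edit_scope_options", "10.1.0.0/24")
assert not is_allowed("alice", "delete_scope", "10.1.0.0/24")  # not in role
assert not is_allowed("alice", "view_scope", "10.9.0.0/24")    # out of scope
```

Assigning a role to a junior administrator, rather than handing out raw permissions, is what makes delegation safe: the senior administrator controls the role definition in one place.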

At the top is an administrator whose computer is running Windows 8. All communications between the IPAM client and IPAM server must go through the role-based access control component, which controls what tasks the administrator is able to perform. The IPAM server performs various data-collection tasks including performing server discovery and monitoring, determining the configuration and availability of servers, performing event collection, determining address utilization, checking for expired addresses, and so on.

In addition, IPAM in Windows Server 2012 R2 can use a Microsoft SQL Server database instead of the Windows Internal Database. This means that you can now ensure that the IPAM database is highly available, back the database up more easily, perform custom queries against it using T-SQL, and so on. IPAM in Windows Server 2012 R2 also integrates with System Center Virtual Machine Manager; such integration can simplify and speed the network discovery of the IP address inventory of your network.

Traditional enterprise networks typically had many different physical networking devices, such as Ethernet switches, routers, virtual private network (VPN) gateways, hardware firewalls, and other kinds of network appliances.

And of course, they needed lots of wires to connect all the network hosts, servers, and appliances together. The modern data center network is no longer like that.

Instead of dozens of network devices and hundreds or thousands of servers, the modern data center might consist of only a dozen or so very powerful virtualization host systems, a handful of 10 GbE switches, and a couple of perimeter firewalls.

Instead of thousands of individual physical servers, you now have thousands of virtual machines running on relatively few Hyper-V hosts. One reason this change is possible is because of server virtualization and consolidation, which enables organizations to virtualize workloads that previously needed to run on physical server systems.

Suppose, for example, that you want to move some of your server workloads into a hoster's cloud, and your physical servers are currently using an addressing scheme tied to your own network. If the hoster is using Hyper-V hosts running Windows Server 2012, your servers can keep their existing IP addresses and subnets when they are moved into the hoster's cloud. This means that your existing clients, which are used to accessing physical servers located on those subnets, can continue doing so with no changes. The ability of Hyper-V Network Virtualization to preserve your network infrastructure addressing and subnet scheme allows existing services to continue to work while being unaware of the physical location of the subnets.

The way this works is that network virtualization enables you to assign two different IP addresses to each virtual machine running on a Windows Server 2012 Hyper-V host. The first is the customer address (CA), the IP address that the virtual machine uses on the customer's own network. The second is the provider address (PA), the IP address used on the hoster's underlying physical network, which is not visible to the virtual machine. The CA for each virtual machine is mapped to the PA for the underlying physical host on which the virtual machine is running.

Virtual machines communicate over the network by sending and receiving packets in the CA space. Before a virtual machine's packet is placed on the physical network, the host encapsulates it inside a new packet using Network Virtualization using Generic Routing Encapsulation (NVGRE). The header of this new packet has the appropriate source and destination PAs, in addition to the virtual subnet ID, which is stored in the Key field of the GRE header. The virtual subnet ID in the GRE header allows hosts to identify the customer virtual machine for any given packet, even though the PAs and the CAs on the packets may overlap.

In other words, Hyper-V Network Virtualization keeps track of CA-to-PA mappings so that hosts can differentiate packets belonging to the virtual machines of different customers. The result is that Hyper-V Network Virtualization allows the hoster to run multiple customer virtual networks on top of a single underlying physical network, in much the same way that server virtualization lets you run multiple virtual servers on a single physical server.
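The CA-to-PA mapping just described can be sketched as a lookup table keyed by virtual subnet ID and customer address. This is a simplified dictionary-based illustration; real NVGRE packets are binary GRE frames, and the addresses and virtual subnet IDs below are invented. Note how two tenants reuse the same CA without conflict because the virtual subnet ID disambiguates them.

```python
LOOKUP = {
    # (virtual subnet ID, customer address) -> provider address of host
    (5001, "10.0.0.5"): "192.168.1.10",
    (5001, "10.0.0.6"): "192.168.1.11",
    (6001, "10.0.0.5"): "192.168.1.11",  # same CA, different tenant
}

def encapsulate(vsid, src_ca, dst_ca, payload):
    """Wrap a CA-space packet in a PA-space packet, keyed by VSID."""
    return {
        "src_pa": LOOKUP[(vsid, src_ca)],
        "dst_pa": LOOKUP[(vsid, dst_ca)],
        "gre_key": vsid,  # virtual subnet ID carried in the GRE Key field
        "inner": {"src_ca": src_ca, "dst_ca": dst_ca, "data": payload},
    }

pkt = encapsulate(5001, "10.0.0.5", "10.0.0.6", b"hello")
print(pkt["src_pa"], "->", pkt["dst_pa"], "vsid", pkt["gre_key"])
```

The receiving host reverses the process: it uses the GRE key plus the inner CA header to deliver the payload to the right tenant's virtual machine.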

Network virtualization isolates each virtual network from every other virtual network so that each virtual network has the illusion that it is a completely separate network. Multiple customers can even use the exact same addressing scheme for their virtual networks; customer networks will be fully isolated from one another and will function as if each network is the only one present with that particular addressing scheme.

Hyper-V Network Virtualization also makes it easier for large enterprises to move server workloads between multiple data centers that have overlapping addressing schemes.

Hyper-V Network Virtualization thus provides increased virtual machine mobility across data centers, hosting provider clouds, and Windows Azure.

Hyper-V Network Virtualization enhancements in Windows Server 2012 R2

While Windows Server 2012 provided the base functionality for implementing network virtualization, Windows Server 2012 R2 includes some new features and enhancements that not only make network virtualization easier to implement and manage but also provide customers with a more comprehensive and integrated software-defined networking (SDN) solution.

One key enhancement is that Hyper-V Network Virtualization in Windows Server 2012 R2 can now dynamically learn the IP addresses on the virtual machine networks. This improvement provides several new benefits, such as increasing the high-availability options available when deploying a network virtualization solution. Hyper-V Network Virtualization in Windows Server 2012 R2 also includes several performance enhancements over how this technology worked in Windows Server 2012.

Microsoft introduced the Hyper-V Extensible Switch in Windows Server 2012 to provide new capabilities for tenant isolation, traffic shaping, protection against malicious virtual machines, and hassle-free troubleshooting.

The Hyper-V Extensible Switch was also designed to allow third parties to develop plug-in extensions to emulate the full capabilities of hardware-based switches and support more complex virtual environments and solutions.

It does this by allowing custom Network Driver Interface Specification (NDIS) filter drivers, called extensions, to be added to the driver stack of the virtual switch. This means that networking independent software vendors (ISVs) can create extensions that can be installed in the virtual switch to perform different actions on network packets being processed by the switch.

Three types of extensions are supported:

Capturing extensions These can capture packets to monitor network traffic but cannot modify or drop packets.

Filtering extensions These can inspect packets and, unlike capturing extensions, can also drop them, for example to enforce security policies.

Forwarding extensions These allow you to modify packet routing and enable integration with your physical network infrastructure. Networking ISVs can use this functionality to develop applications that can perform packet inspection on a virtual network.

In Windows Server 2012, the network virtualization filter sat beneath the Hyper-V Extensible Switch rather than inside it. This meant that capturing, filtering, or forwarding extensions installed in the switch could see only the CA packets, that is, the packets that the virtual machine sees on its virtual network. In Windows Server 2012 R2, however, network virtualization functionality now resides in the Hyper-V Extensible Switch itself, as shown on the right in the figure. This means that third-party extensions can now process network packets using either CA or PA addresses, as desired.

This enables new types of scenarios, such as hybrid forwarding, whereby Hyper-V Network Virtualization forwards the network virtualization traffic while a third-party extension forwards non-network-virtualization traffic.

Finally, if you wanted to implement a network virtualization solution using Hyper-V in Windows Server 2012, you needed to use third-party gateway products to route traffic between your virtual networks and the rest of your network.

Without such a gateway, Hyper-V Network Virtualization by itself in Windows Server 2012 creates virtual subnets that are separated from the rest of the network the way islands are separated from the mainland. Beginning with Windows Server 2012 R2, however, Hyper-V Network Virtualization includes an in-box component called Windows Server Gateway (WSG), a virtual machine-based software router that allows cloud hosters to route data center and cloud network traffic between virtual and physical networks, including the Internet.

WSG can be used to route network traffic between physical and virtual networks at the same physical location or at multiple different physical locations. This allows organizations to implement new kinds of scenarios. For example, if your organization has both a physical network and a virtual network at the same location, you can now deploy a Hyper-V host configured with a WSG virtual machine and use it to route traffic between your virtual and physical networks.

WSG is fully integrated with Hyper-V Network Virtualization in Windows Server R2 and allows routing of network traffic even when multiple customers are running tenant networks in the same datacenter. You can also deploy WSG on a failover cluster of Hyper-V hosts to create a highly available WSG solution to provide failover protection against network outages or hardware failure.

WSG enables several scenarios: implementing a multitenant-aware VPN for site-to-site connectivity, for example, to allow an enterprise to span a single virtual network across multiple data centers in different geographical locations; performing multitenant-aware network address translation (NAT) for Internet access from a virtual network; and providing a forwarding gateway for in-data center physical machine access from a virtual network.

In traditional symmetric multiprocessing systems, all processors access memory over a shared bus, which can become a bottleneck as more processors are added. Non-Uniform Memory Access (NUMA) alleviated such bottlenecks by limiting how many CPUs could be on any one memory bus and connecting them with a high-speed interconnection.

Server Name Indication (SNI), a TLS extension supported by IIS beginning in Windows Server 2012, allows a client to indicate the hostname it is trying to reach, so multiple SSL websites can share a single IP address. SNI thus provided increased scalability for web servers hosting multiple SSL sites, and it helped cloud hosting providers better conserve the dwindling resources of their pool of available IP addresses.

SSL central store

Before Windows Server 2012, managing SSL certificates on servers in IIS web farms was time consuming because the certificates had to be imported into every server in the farm, which made scaling out a farm by deploying additional servers a complex task. Replicating certificates across IIS servers in a farm was further complicated by the need to manually ensure that certificate versions were in sync with one another.

IIS in Windows Server 2012 solved this problem by introducing a central store for SSL certificates, located on a file share on the network instead of in the certificate store of each host.
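The central-store idea can be sketched as a single shared mapping from SNI hostname to certificate file that every server in the farm consults, instead of each server maintaining a local copy. The share paths and hostnames below are hypothetical, and real IIS binds certificates through its configuration rather than through code like this.

```python
CENTRAL_STORE = {
    # hostname announced via SNI -> certificate file on the file share
    "www.contoso.com": r"\\fileserver\certs\www.contoso.com.pfx",
    "shop.contoso.com": r"\\fileserver\certs\shop.contoso.com.pfx",
}

def certificate_for(sni_hostname):
    """Resolve the shared certificate path for a TLS client's SNI name."""
    try:
        return CENTRAL_STORE[sni_hostname]
    except KeyError:
        raise LookupError(f"no certificate provisioned for {sni_hostname}")

print(certificate_for("www.contoso.com"))
```

Because every farm member resolves from the same store, adding a server or renewing a certificate is a single-location change, which is exactly the scaling problem the central store removes.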



