Tidbits from the session include:
- Ability to define logical networks using VLANs and subnets per datacenter location. For example, you can configure separate pools for Boston, LA, DC, and London. When you deploy a VM to a location, VMM automatically uses the right pool and presents only the logical pools valid for that site. You can't accidentally assign a VM in Boston a London IP.
- Address management for static IPs, load balancer VIPs, and MAC addresses (both the VMware MAC address range and the general MAC address range). VMM uses a check-out and check-in mechanism for static IPs. No more spreadsheets to keep track of your IPs. Select the proper pool, and it will use the next unused address. Delete the VM? That IP goes back into the pool. 100% automated static IP assignments. Sweet! Same thing for MAC addresses and HLB VIPs.
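The check-out/check-in behavior described above is easy to picture as a small model. This is a minimal sketch, not VMM's actual API; the class and method names (and the Boston pool) are invented for illustration:

```python
from ipaddress import IPv4Network

class StaticIPPool:
    """Toy model of VMM's check-out/check-in for a site-scoped static IP pool."""

    def __init__(self, site, cidr):
        self.site = site
        # All host addresses in the subnet start out available.
        self.available = [str(ip) for ip in IPv4Network(cidr).hosts()]
        self.assigned = {}  # vm_name -> ip

    def check_out(self, vm_name):
        # The next unused address is handed to the VM automatically.
        ip = self.available.pop(0)
        self.assigned[vm_name] = ip
        return ip

    def check_in(self, vm_name):
        # Deleting the VM returns its IP to the pool for reuse.
        self.available.append(self.assigned.pop(vm_name))

boston = StaticIPPool("Boston", "10.1.0.0/29")
ip = boston.check_out("web01")   # next unused address, e.g. "10.1.0.1"
boston.check_in("web01")         # address is back in the pool
```

The same check-out/check-in pattern applies to MAC addresses and HLB VIPs; only the pool contents differ.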
- Automated provisioning of F5, Citrix NetScaler and Brocade (at RTM) hardware load balancers. F5 and Citrix both now have virtual LB appliances, BTW. The lack of Cisco support is a bit surprising.
- You can define HLB VIP templates that define properties such as protocol, LB method, persistence, and health monitors. You assign an HLB to a site, so when you deploy an application it automatically uses the proper physical HLB and checks out the proper IPs.
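To make the VIP template idea concrete, here is a rough sketch of the shape such a template might take. The field names and the site-to-HLB mapping below are assumptions for illustration, not VMM's actual schema:

```python
from dataclasses import dataclass

@dataclass
class VIPTemplate:
    """Illustrative VIP template; fields mirror the properties listed above."""
    protocol: str
    lb_method: str
    persistence: str
    health_monitor: str

# One template can be reused everywhere; deployment binds it to whichever
# physical HLB is assigned to the target site.
https_vip = VIPTemplate(protocol="HTTPS",
                        lb_method="LeastConnections",
                        persistence="SourceIP",
                        health_monitor="HTTPS GET /health")

site_hlb = {"Boston": "f5-bos-01", "London": "netscaler-lon-01"}

def deploy(app_site):
    # The site determines the HLB; the template supplies the VIP settings.
    return (site_hlb[app_site], https_vip)

hlb, vip = deploy("Boston")   # -> ("f5-bos-01", https_vip)
```

The point is the separation of concerns: the template captures *how* to load balance, while site assignment decides *where*.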
- The storage component discovers storage arrays and pools of storage, lets you classify storage based on capabilities you dictate (throughput, availability, etc.), and can discover and configure LUNs and assign them to Hyper-V hosts and clusters. You could have platinum storage, gold storage, silver storage, or bronze storage (or whatever names you want).
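Classification is just you mapping capabilities to tier names. A quick sketch of that idea, with made-up tier requirements (the thresholds and RAID levels are examples, not anything VMM prescribes):

```python
# Admin-defined tiers: highest tier first, each with the capabilities
# a storage pool must meet to qualify. Values are invented for the demo.
classifications = {
    "Platinum": {"min_iops": 50_000, "raid": "RAID 10"},
    "Gold":     {"min_iops": 20_000, "raid": "RAID 10"},
    "Silver":   {"min_iops": 5_000,  "raid": "RAID 5"},
}

def classify(pool_iops, pool_raid):
    """Return the highest tier whose requirements the pool meets."""
    for tier, req in classifications.items():
        if pool_iops >= req["min_iops"] and pool_raid == req["raid"]:
            return tier
    return "Bronze"  # catch-all tier for everything else

print(classify(25_000, "RAID 10"))  # Gold
print(classify(1_000, "RAID 5"))    # Bronze
```

Once pools carry a classification, VM placement can simply ask for "Gold" storage instead of a specific array or LUN.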
- Storage capabilities include end-to-end storage device mapping, allocation and assignment of storage, provisioning a VM using the SAN array's hardware copy capabilities, and storage migration of a VM (e.g., similar to VMware's Storage vMotion).
- End-to-end mapping is truly end-to-end: service instance to VMs, to logical disks in the guest, to the guest volumes, to the physical logical disk, to the LUN, to the array disk pool, to the disk array, to the array provider. This information is fed to SCOM for event/performance correlation (very sweet).
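The value of that chain is correlation: when one layer has a problem, everything above it is a suspect. A toy walk of the chain described above (every name here is invented for illustration):

```python
# The end-to-end mapping, top (service) to bottom (array provider).
chain = [
    "service: WebTier",
    "vm: web01",
    "guest volume: E:",
    "host logical disk: Disk 2",
    "lun: LUN 47",
    "pool: Pool-A",
    "array: EMC-VNX-01",
    "provider: SMI-S Provider",
]

def impacted_by(level):
    """Everything above a failing layer is potentially impacted --
    the kind of correlation SCOM can do once VMM feeds it this chain."""
    return chain[:chain.index(level)]

print(impacted_by("lun: LUN 47"))  # service, VM, guest volume, host disk
```

A failure on LUN 47 immediately points SCOM at the one service, VM, and volume that sit on it, rather than every workload on the array.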
- Uses a standards-based approach for discovery, SMI-S v1.4. Many vendors are working on providers if they don't already have them.
- Supported storage types include Fibre Channel, iSCSI, and local storage. (Not sure about FCoE but I think it is supported.)
- Supports configuring iSCSI masking/unmasking and initiator logon/logoff parameters.
- Supports Fibre Channel masking/unmasking and NPIV vPort creation/deletion.
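If masking/unmasking is unfamiliar: a LUN is only visible to the host initiators (iSCSI IQNs or FC WWPNs) on its masking list, and VMM can manage that list when it assigns storage to a host or cluster. A minimal model of the concept, with invented names:

```python
# Each LUN carries a set of initiators allowed to see it.
masking = {"LUN 47": set()}

def unmask(lun, initiator):
    # Unmasking exposes the LUN to a host's initiator.
    masking[lun].add(initiator)

def mask(lun, initiator):
    # Masking hides the LUN from that initiator again.
    masking[lun].discard(initiator)

unmask("LUN 47", "iqn.1991-05.com.microsoft:hyperv01")
print("iqn.1991-05.com.microsoft:hyperv01" in masking["LUN 47"])  # True
```

In practice VMM drives this through the array's SMI-S provider when it assigns a LUN to a Hyper-V host or cluster, rather than you editing masking lists by hand.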
- This is NOT a storage management tool, so you won't use VMM to create an entirely new LUN on your storage array. You will continue to use your array's tools. Likewise for the network, this will not create VLANs in your network, but will consume them. (Although you could use Orchestrator 2012 to automate the creation in the array/switch, then have VMM discover it.)