We recently did a webcast with Dell about 10GbE iSCSI over Data Center Bridging (DCB) ("Solve your data traffic queue problems with Dell EqualLogic & Emulex iSCSI over DCB Solutions"), a.k.a. Converged Enhanced Ethernet, Enhanced Ethernet, or Data Center Ethernet; take your pick. Joining us were Sarah Cook and Gary Gumanow from Dell, and Sean Murphy (yes, he spells 'Sean' properly, unlike myself) from Emulex. We had about 300 folks on this webcast to talk about iSCSI. So what did they hear? Gary took us through the end-to-end solution and the technical details of what is new about 10GbE over DCB, and Sean Murphy, our resident VMware product marketing manager, took us through the management and deployment tools for VMware environments.
What is iSCSI over DCB?
DCB or DCE (Cisco's version of DCB; Data Center Ethernet) or CEE (Brocade's version of DCB; Converged Enhanced Ethernet) improves the Ethernet fabric irrespective of what protocol (be it iSCSI, NFS, TCP or FCoE) runs on top of it. It does this by adding lossless characteristics similar to Fibre Channel (FC). This means that iSCSI over DCB can now provide the same enterprise-class services as FC at a lower cost and help drive network consolidation via convergence. To do this, DCB adds four key technical capabilities to standard Ethernet to bring the best of FC and iSCSI together into one solution.
- IEEE 802.1Qbb, Priority Flow Control (PFC), is a PAUSE-based flow control mechanism that extends the legacy Ethernet PAUSE. Rather than pausing the full link, each PAUSE operates on one of eight priority levels within the link, so a PAUSE on one priority does not stop the entire link.
- IEEE 802.1Qaz, Enhanced Transmission Selection (ETS), defines a technique to allocate bandwidth to entities called "Priority Groups," which are collections of priorities (0-7). This specification also defines a link-based protocol (DCBx) that permits link-local parameter negotiation.
- IEEE 802.1Qau, Congestion Notification, provides a message-based, end-to-end congestion notification mechanism that can throttle senders that, for example, overrun the receive capacity of a target. Note that 802.1Qbb is link flow control, whereas 802.1Qau is end-to-end flow control that may travel over several links in a path.
- IETF TRILL (Transparent Interconnection of Lots of Links) is not part of the IEEE DCB specifications, but is often discussed in the same context because it provides multi-pathing capability for Ethernet Layer 2 fabrics, a facility already provided in FC fabrics.
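The key difference between legacy Ethernet PAUSE and 802.1Qbb PFC can be sketched in a few lines. This is a minimal, purely illustrative model (the `Link` class and its method names are hypothetical, not a real DCB API): a legacy PAUSE halts all traffic on the link, while a PFC PAUSE halts only the named priority, leaving the other seven flowing.

```python
# Illustrative sketch only: models the behavioral difference between
# legacy 802.3x PAUSE (whole link) and 802.1Qbb PFC (per priority).
# Class and method names are hypothetical, not an actual DCB API.

class Link:
    """A link carrying eight traffic priorities (0-7)."""

    def __init__(self):
        self.paused = [False] * 8  # per-priority pause state

    def receive_pfc_pause(self, priority):
        # A PFC PAUSE frame stops only the named priority.
        self.paused[priority] = True

    def receive_legacy_pause(self):
        # A legacy Ethernet PAUSE stops every priority on the link.
        self.paused = [True] * 8

    def can_transmit(self, priority):
        return not self.paused[priority]


link = Link()
link.receive_pfc_pause(3)       # pause the lossless storage class only
print(link.can_transmit(3))     # False: priority 3 is paused
print(link.can_transmit(0))     # True: other priorities keep flowing
```

This is why a congested iSCSI priority can be made lossless without stalling, say, ordinary LAN traffic sharing the same 10GbE link.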
During the webcast, Gary really takes you into the details of why this is valuable to your enterprise.
Management and Optimization for VMware
Sean took us through the details of configuring and setting up iSCSI over DCB in VMware environments. Emulex is a long-term partner of VMware and has developed industry-leading tools for VMware and vCenter to make deployment fast and simple. OneCommand Manager for VMware vCenter (OCM for VMware) is a native software plug-in that integrates real-time lifecycle management of Emulex LightPulse® Host Bus Adapters (HBAs) and OneConnect™ Universal Converged Network Adapters (UCNAs) into the VMware vCenter console. This tight integration centralizes and simplifies virtualization management. OCM for VMware builds on Emulex Common Information Model (CIM) providers and established OneCommand Manager features to proactively address key data center issues and improve operational efficiency across VMware hosts and clusters. The core functionality delivered with OCM for VMware includes multi-protocol management (FC, FCoE, iSCSI, NIC), online firmware flashing, configuration updates, reporting options, adapter diagnostics, and flexible graphical and command line interfaces.
Sean also touched on our new Universal Multi-Channel (UMC) capability, which enables our adapters to present multiple functions (NIC, iSCSI, FCoE). Each function appears to the operating system, or hypervisor, as a physical port with its own MAC address and assigned bandwidth. UMC is enabled and managed at boot time.
Most servers are currently deployed with multiple 1GbE physical connections. Typically, these additional ports are used to support virtual servers and high availability, and to provide bandwidth needed for I/O-intensive applications. UMC provides a similar capability for 10GbE networking by using individually configurable partitions of the 10GbE port. With UMC, data centers can save on costs for cabling, adapters, switches and power.
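Conceptually, UMC-style partitioning carves one 10GbE port into separately addressed functions, each with its own MAC and a share of the port's bandwidth. The sketch below is hypothetical and illustrative only (the partition table, MAC addresses, and weights are made up; this is not the actual Emulex configuration interface), but it shows the bookkeeping: weighted shares of a single 10 Gb/s port replacing several physical 1GbE ports.

```python
# Hypothetical sketch of UMC-style port partitioning. The partition
# names, MACs, and weights below are illustrative assumptions, not
# real Emulex configuration data.

PORT_BANDWIDTH_GBPS = 10

partitions = [
    {"function": "NIC",   "mac": "00:00:c9:00:00:01", "weight": 4},
    {"function": "iSCSI", "mac": "00:00:c9:00:00:02", "weight": 4},
    {"function": "FCoE",  "mac": "00:00:c9:00:00:03", "weight": 2},
]

# Divide the port's bandwidth proportionally to each partition's weight.
total_weight = sum(p["weight"] for p in partitions)
for p in partitions:
    p["gbps"] = PORT_BANDWIDTH_GBPS * p["weight"] / total_weight
    print(f'{p["function"]:5s} {p["mac"]}  {p["gbps"]:.1f} Gb/s')
```

One converged port configured this way replaces the cabling, adapters, and switch ports that several dedicated 1GbE connections would otherwise consume.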
Business Drivers for 10GbE
Since I was the non-techie on this call, I outlined the key market drivers and covered the top ten business drivers for 10GbE iSCSI.
If you have an extra hour, take a listen and find out about the latest in iSCSI.