Storage and Network Convergence Using FCoE and iSCSI

Book Description

Along with servers and networking infrastructure, networked storage is one of the fundamental components of a modern data center. As storage networking has evolved over the past two decades, the industry has settled on a small set of basic storage networking technologies: Fibre Channel (FC) storage area networks (SANs), Internet Small Computer System Interface (iSCSI) attachment over Ethernet, and Ethernet-based network-attached storage (NAS). Today, lossless, low-latency, high-speed FC SANs are viewed as the high-performance option for networked storage, while iSCSI and NAS are viewed as lower-cost, lower-performance technologies.

The advent of 10 Gbps Ethernet and the Data Center Bridging (DCB) standards for lossless Ethernet gives Ethernet technology many of the desirable characteristics that make FC the preferred storage networking technology. These characteristics include comparable speed, low latency, and lossless behavior. Coupled with an ongoing industry drive toward better asset utilization and lower total cost of ownership, these advances open the door for organizations to consider consolidating and converging their networked storage infrastructures with their Ethernet data networks. Fibre Channel over Ethernet (FCoE) is one approach to this convergence, but 10 Gbps-enabled iSCSI also offers compelling options for many organizations, with performance that can now rival that of FC.

This IBM® Redbooks® publication is written for experienced systems, storage, and network administrators who want to integrate IBM System Networking and Storage technology successfully into new and existing networks. This book provides an overview of today's options for storage networking convergence. It reviews the technology background for each of these options and then examines detailed scenarios for them that use IBM and IBM Business Partner convergence products.

Table of Contents

  1. Front cover
  2. Notices
    1. Trademarks
  3. Preface
    1. Authors
    2. Now you can become a published author, too!
    3. Comments welcome
    4. Stay connected to IBM Redbooks
  4. Part 1 Overview of storage and network convergence
    1. Chapter 1. Introduction to convergence
      1. 1.1 What convergence is
        1. 1.1.1 Calling it what it is
      2. 1.2 Vision of convergence in data centers
      3. 1.3 The interest in convergence now
      4. 1.4 Fibre Channel SANs today
      5. 1.5 Ethernet-based storage today
      6. 1.6 Benefits of convergence in storage and network
      7. 1.7 Challenge of convergence
      8. 1.8 Conclusion
    2. Chapter 2. Fibre Channel over Ethernet
      1. 2.1 Background: Data Center Bridging
        1. 2.1.1 Priority-based Flow Control: IEEE 802.1Qbb
        2. 2.1.2 Enhanced Transmission Selection: IEEE 802.1Qaz
        3. 2.1.3 Data Center Bridging Capabilities Exchange: IEEE 802.1Qaz
        4. 2.1.4 Congestion Notification: IEEE 802.1Qau
      2. 2.2 Standards work related to FCoE
        1. 2.2.1 Transparent Interconnection of Lots of Links
        2. 2.2.2 Shortest Path Bridging: IEEE 802.1aq
      3. 2.3 FCoE concepts
        1. 2.3.1 FCoE protocol stack
        2. 2.3.2 Topology
        3. 2.3.3 FCoE Initialization Protocol and snooping bridges
        4. 2.3.4 MAC addresses used by end devices
        5. 2.3.5 FCFs, Fabric Mode, and NPIV
        6. 2.3.6 Distributed FCF under development
      4. 2.4 Technology comparison: FCoE with iSCSI
        1. 2.4.1 Similarities
        2. 2.4.2 Differences
      5. 2.5 Summary of technology used
        1. 2.5.1 Initial cost at purchase
        2. 2.5.2 Time to deploy
        3. 2.5.3 Necessary skills
      6. 2.6 Conclusion
    3. Chapter 3. Internet Small Computer System Interface
      1. 3.1 Introduction to iSCSI
        1. 3.1.1 iSCSI overview
        2. 3.1.2 iSCSI protocol in depth
      2. 3.2 iSCSI initiators
        1. 3.2.1 Software-only solutions
        2. 3.2.2 Software with hardware assistance
        3. 3.2.3 Hardware-only solutions
      3. 3.3 Performance considerations
        1. 3.3.1 Jumbo frames
        2. 3.3.2 Prioritization and bandwidth allocation
      4. 3.4 Multipathing with iSCSI
        1. 3.4.1 IEEE 802.3ad Link Aggregation Control Protocol and Etherchannel
        2. 3.4.2 Active-Active multipathing
        3. 3.4.3 Multiconnection sessions
    4. Chapter 4. IBM products that support FCoE and iSCSI
      1. 4.1 Converged Network Adapters (CNAs)
        1. 4.1.1 IBM Flex System
        2. 4.1.2 BladeCenter
        3. 4.1.3 IBM System x and IBM Power Systems
      2. 4.2 Switches
        1. 4.2.1 Flex Chassis
        2. 4.2.2 BladeCenter
        3. 4.2.3 Top-of-Rack (ToR) / End-of-Row (EoR)
      3. 4.3 Storage systems
        1. 4.3.1 IBM SAN Volume Controller
        2. 4.3.2 IBM Storwize family
        3. 4.3.3 IBM Flex System V7000 Storage Node
        4. 4.3.4 IBM XIV Storage System
        5. 4.3.5 IBM System Storage DS3500 Express
        6. 4.3.6 IBM System Storage DCS3700
      4. 4.4 Introduction to component management
        1. 4.4.1 IBM Flex System Chassis Management Module (CMM)
        2. 4.4.2 IBM Flex System Manager (FSM)
        3. 4.4.3 IBM System Networking Switch Center
  5. Part 2 Preparing infrastructure for storage and network convergence
    1. Chapter 5. Topologies and lab architecture
      1. 5.1 Typical topologies
        1. 5.1.1 IBM Flex System topology with IBM Flex Systems CN4093 switch
        2. 5.1.2 IBM Flex System topology with IBM Flex EN4093 switch to top-of-rack IBM System Networking GS8264CS switch
        3. 5.1.3 IBM Flex System topology with IBM Flex System EN4091 10Gb Ethernet Pass-thru Module to IBM System Networking GS8264CS switch
        4. 5.1.4 IBM BladeCenter topology with embedded FCF
        5. 5.1.5 IBM BladeCenter topology with BNT Virtual Fabric 10Gb Switch Module to top-of-rack IBM System Networking GS8264CS switch
        6. 5.1.6 IBM BladeCenter topology with 10Gb Ethernet Pass-Thru Module to a top-of-rack IBM System Networking GS8264CS switch
        7. 5.1.7 IBM rack server topology connected to a top-of-rack IBM System Networking GS8264CS switch
        8. 5.1.8 IBM rack server topology with intermediate switch to an IBM System Networking GS8264CS switch
      2. 5.2 Lab architecture
        1. 5.2.1 Setup with IBM Flex Systems CN4093 switch inside IBM Flex System chassis
        2. 5.2.2 Setup with the IBM System Networking GS8264CS switch and the IBM Flex EN4093 switch inside the Flex chassis
      3. 5.3 Equipment used in the lab
      4. 5.4 Conclusion
    2. Chapter 6. Using FCoE and iSCSI in a converged network
      1. 6.1 Keeping it isolated
      2. 6.2 iSCSI and differences from FC/FCoE in a CEE world
        1. 6.2.1 Enabling CEE and iSCSI support
        2. 6.2.2 Initiator to target relationship
        3. 6.2.3 Mandatory security in real-world situations
      3. 6.3 FCoE commonalities and differences from FC in a CEE world
        1. 6.3.1 Enabling FCoE support
        2. 6.3.2 Understanding of the required fabric mode
        3. 6.3.3 Zoning
      4. 6.4 Host mapping and multipathing
      5. 6.5 Summary
    3. Chapter 7. Installing and enabling the Converged Network Adapter
      1. 7.1 Installing and enabling CN4054 10Gb Virtual Fabric Adapter on IBM Flex System
        1. 7.1.1 Updating the firmware
        2. 7.1.2 Checking and enabling FCoE settings
      2. 7.2 Installing and enabling the Emulex CNA
        1. 7.2.1 Loading the default settings on the Emulex CNA
      3. 7.3 Installing and enabling the Emulex 10GB Virtual Fabric Adapters I and II for iSCSI
        1. 7.3.1 Updating firmware
        2. 7.3.2 Installing a driver in a Windows environment
        3. 7.3.3 Installing the iSCSI driver in a VMware environment
        4. 7.3.4 Installing OneCommand Manager in a Linux environment
      4. 7.4 Installing the CNA software management tools
        1. 7.4.1 Installing OneCommand Manager in Windows
        2. 7.4.2 Changing the personality of Emulex Virtual Fabric Adapter II
        3. 7.4.3 Configuring NIC teaming for the Emulex Virtual Fabric Adapter II
        4. 7.4.4 Installing the Emulex management application in VMware
      5. 7.5 Installing and enabling the QLogic 2-port 10Gb Converged Network Adapter
        1. 7.5.1 Updating the firmware
        2. 7.5.2 Installing drivers
        3. 7.5.3 Installing the management software
        4. 7.5.4 Setting the adapter for iSCSI
        5. 7.5.5 Setting the adapter for FCoE
        6. 7.5.6 Configuring the VLAN on the network adapter
        7. 7.5.7 Configuring network teaming and VLANs
      6. 7.6 Installing and enabling the Brocade 2-port 10GbE Converged Network Adapter
        1. 7.6.1 Installing the drivers and management software
        2. 7.6.2 Updating the firmware
        3. 7.6.3 Setting the adapter for iSCSI
        4. 7.6.4 Setting the adapter for FCoE
        5. 7.6.5 Configuring VLAN
        6. 7.6.6 Configuring network teaming and VLANs on the team
      7. 7.7 iSCSI connectors
        1. 7.7.1 Hardware iSCSI initiators
        2. 7.7.2 Software iSCSI initiators
    4. Chapter 8. FC and FCoE zone configuration
      1. 8.1 Why zoning is important
      2. 8.2 Zoning on the IBM Flex System
        1. 8.2.1 Creating FCoE zoning with the GUI
        2. 8.2.2 Creating FCoE zoning with the CLI
      3. 8.3 Brocade zoning
      4. 8.4 Cisco zoning
      5. 8.5 QLogic zoning
      6. 8.6 Conclusion
  6. Part 3 Implementing storage and network convergence
    1. Chapter 9. Configuring iSCSI and FCoE cards for SAN boot
      1. 9.1 Preparing to set up a boot from SAN environment on a UEFI system
        1. 9.1.1 Scenario environment
        2. 9.1.2 Before you start
      2. 9.2 Optimizing UEFI for boot from SAN
        1. 9.2.1 Loading the UEFI default settings
        2. 9.2.2 Optional: Disabling the onboard SAS controller
        3. 9.2.3 Optional: Setting the CNA card as the first boot device in UEFI
        4. 9.2.4 Next steps
      3. 9.3 Configuring IBM Flex System CN4054 for iSCSI
        1. 9.3.1 Configuring IBM Flex System CN4054 for boot from SAN
        2. 9.3.2 Configuring the IBM Flex System CN4054
        3. 9.3.3 Loading the default settings on the IBM Flex System CN4054
        4. 9.3.4 Configuring the IBM Flex System CN4054 settings
        5. 9.3.5 Booting from SAN variations
        6. 9.3.6 Troubleshooting
      4. 9.4 Configuring IBM Flex System CN4054 for FCoE
        1. 9.4.1 Configuring an IBM Flex System CN4054 for boot from SAN
        2. 9.4.2 Configuring the IBM Flex System CN4054
        3. 9.4.3 Loading the default settings on the IBM Flex System CN4054
        4. 9.4.4 Configuring the IBM Flex System CN4054 settings
        5. 9.4.5 Booting from SAN variations
        6. 9.4.6 Installing Windows 2012 in UEFI mode
        7. 9.4.7 Booting the Windows DVD in UEFI mode
        8. 9.4.8 Installing SUSE Linux Enterprise Server 11 Service Pack 2
        9. 9.4.9 Booting the SLES 11 SP2 DVD in UEFI mode
        10. 9.4.10 Installing Windows 2012 in legacy mode
        11. 9.4.11 Optimizing the boot for legacy operating systems
        12. 9.4.12 Windows installation sequence
        13. 9.4.13 Troubleshooting
      5. 9.5 Configuring Emulex for iSCSI for the BladeCenter
        1. 9.5.1 Configuring Emulex card for boot from SAN
        2. 9.5.2 Configuring the Emulex CNA
        3. 9.5.3 Loading the default settings on the Emulex CNA
        4. 9.5.4 Configuring the Emulex settings
        5. 9.5.5 Booting from SAN variations
        6. 9.5.6 Installing Windows 2008 x64 or Windows 2008 R2 (x64) in UEFI mode
        7. 9.5.7 Booting the Windows DVD in UEFI mode
        8. 9.5.8 Installing Windows 2008 x86 in legacy mode
        9. 9.5.9 Optimizing the boot for legacy operating systems
        10. 9.5.10 Troubleshooting
      6. 9.6 Configuring Emulex for FCoE in the BladeCenter
        1. 9.6.1 Configuring an Emulex card for boot from SAN
        2. 9.6.2 Configuring the Emulex CNA
        3. 9.6.3 Loading the default settings on the Emulex CNA
        4. 9.6.4 Configuring the Emulex settings
        5. 9.6.5 Booting from SAN variations
        6. 9.6.6 Installing Windows 2008 x64 or Windows 2008 R2 (x64) in UEFI mode
        7. 9.6.7 Booting the Windows DVD in UEFI mode
        8. 9.6.8 Installing Windows 2008 x86 in legacy mode
        9. 9.6.9 Optimizing the boot for legacy operating systems
        10. 9.6.10 Troubleshooting
      7. 9.7 Configuring QLogic for FCoE in the BladeCenter
        1. 9.7.1 Configuring the QLogic card for boot from SAN
        2. 9.7.2 Configuring the QLogic CNA
        3. 9.7.3 Adding a boot device
        4. 9.7.4 Booting from SAN variations
        5. 9.7.5 Installing Windows 2008 x64 or Windows 2008 R2 (x64) in UEFI mode
        6. 9.7.6 Booting the Windows DVD in UEFI mode
        7. 9.7.7 Installing Windows 2008 x86 in legacy mode
        8. 9.7.8 Optimizing the boot for legacy operating systems
        9. 9.7.9 Troubleshooting
      8. 9.8 Configuring Brocade for FCoE in the BladeCenter
        1. 9.8.1 Configuring the Brocade card for boot from SAN
        2. 9.8.2 Configuring the Brocade CNA
        3. 9.8.3 Booting from SAN variations
        4. 9.8.4 Installing Windows 2008 x64 or Windows 2008 R2 (x64) in UEFI mode
        5. 9.8.5 Booting the Windows DVD in UEFI mode
        6. 9.8.6 Installing Windows 2008 x86 in legacy mode
        7. 9.8.7 Optimizing the boot for legacy operating systems
        8. 9.8.8 Boot from SAN by using the First LUN option
        9. 9.8.9 Installing Windows in legacy BIOS mode
        10. 9.8.10 Troubleshooting: Hardware does not support boot to disk
      9. 9.9 After the operating system is installed
        1. 9.9.1 Installing the disk storage redundant driver on the blade
        2. 9.9.2 Zoning other CNA ports on the switches
        3. 9.9.3 Mapping the LUN to the other CNA port on the SAN disk subsystem
        4. 9.9.4 Optional: Verifying connectivity on server with CNA management tools
      10. 9.10 Common symptoms and tips
      11. 9.11 References about boot from SAN
      12. 9.12 Summary
    2. Chapter 10. Approach with FCoE inside the BladeCenter
      1. 10.1 Implementing IBM BladeCenter enabled for FCoE with Virtual Fabric Switch and Virtual Extension Module
        1. 10.1.1 Defining the FCoE and FC fabric topology
        2. 10.1.2 Configuring the BNT Virtual Fabric 10Gb Switch Modules
        3. 10.1.3 Configuring the QLogic Virtual Extension Modules
        4. 10.1.4 Switching the Virtual Fabric Extension Module to N-Port Virtualization mode if connected to an existing FC fabric
        5. 10.1.5 Configuring the FCoE VLAN ID on the CNA
        6. 10.1.6 Configuring FCoE for the IBM Virtual Fabric Adapter in a virtual network interface card
        7. 10.1.7 Summary assessment
      2. 10.2 Enabling FCoE host access by using the Brocade Converged 10G Switch Module solution
        1. 10.2.1 Configuring the Brocade Converged 10G Switch Module
        2. 10.2.2 Summary assessment
    3. Chapter 11. Approach with FCoE between BladeCenter and a top-of-rack switch
      1. 11.1 Overview of testing scenarios
      2. 11.2 BNT Virtual Fabric 10Gb Switch Module utilizing the Nexus 5010 Fast Connection Failover
        1. 11.2.1 BNT Virtual Fabric 10Gb Switch Module configuration
        2. 11.2.2 BNT Virtual Fabric 10Gb Switch Module configuration with vNIC
        3. 11.2.3 Nexus 5010 configuration
      3. 11.3 Cisco Nexus 4001i embedded switch with Nexus 5010 FCF
        1. 11.3.1 Nexus 4001i configuration
        2. 11.3.2 Nexus 5010 switch configuration
      4. 11.4 Commands and pointers for FCoE
        1. 11.4.1 Nexus 4001i Switch Module
      5. 11.5 Full switch configurations
        1. 11.5.1 BNT Virtual Fabric 10Gb Switch Module configuration in pNIC mode
        2. 11.5.2 BNT Virtual Fabric 10Gb Switch Module configuration in vNIC mode
        3. 11.5.3 Nexus 5010 switch configuration
        4. 11.5.4 Nexus 4001i configuration
    4. Chapter 12. Approach with FCoE inside the Flex Chassis
      1. 12.1 Implementing IBM Flex System Enterprise Chassis enabled for FCoE with IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
        1. 12.1.1 Overview of testing scenarios
      2. 12.2 Configuring the IBM Flex System Fabric CN4093
      3. 12.3 Commands and pointers for FCoE
        1. 12.3.1 Configuring the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch in a pNIC/vNIC and Full Fabric mode
        2. 12.3.2 Configuring the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch in a pNIC/vNIC and NPV mode
      4. 12.4 Full switch configurations
        1. 12.4.1 BNT Virtual Fabric 10Gb Switch Module for IBM BladeCenter
        2. 12.4.2 IBM Flex System Fabric CN4093 in pNIC and Full Fabric mode
        3. 12.4.3 IBM Flex System Fabric CN4093 in vNIC and Full Fabric mode
        4. 12.4.4 IBM Flex System Fabric CN4093 in pNIC and NPV mode
        5. 12.4.5 IBM Flex System Fabric CN4093 in vNIC and NPV mode
      5. 12.5 Summary assessment
    5. Chapter 13. Approach with FCoE between the IBM Flex Chassis and a top-of-rack switch
      1. 13.1 Overview of testing scenarios
        1. 13.1.1 Scenario with the IBM System Networking GS8264CS switch in FCF mode
        2. 13.1.2 Scenario with the IBM System Networking GS8264CS switch in NPV mode
      2. 13.2 IBM System Networking GS8264CS switch
        1. 13.2.1 IBM System Networking GS8264CS switch configuration FCF mode
        2. 13.2.2 IBM System Networking GS8264CS switch configuration NPV mode
        3. 13.2.3 IBM EN4093 configuration with pNIC
        4. 13.2.4 IBM EN4093 configuration with vNIC
      3. 13.3 Commands and pointers for FCoE
        1. 13.3.1 IBM System Networking GS8264CS switch commands for FCF mode
        2. 13.3.2 IBM System Networking GS8264CS switch commands for NPV mode
        3. 13.3.3 IBM Flex System EN4093 switch commands for pNIC mode
        4. 13.3.4 IBM Flex System EN4093 switch commands for vNIC mode
      4. 13.4 Full switch configurations
        1. 13.4.1 GS8264CS FCF configuration
        2. 13.4.2 GS8264CS NPV configuration
        3. 13.4.3 IBM Flex System EN4093 switch configuration in pNIC mode
        4. 13.4.4 IBM Flex System EN4093 switch configuration in vNIC mode
        5. 13.4.5 BNT Virtual Fabric 10Gb Switch Module configuration in vNIC mode
      5. 13.5 Summary assessment
    6. Chapter 14. Approach with iSCSI
      1. 14.1 iSCSI implementation
        1. 14.1.1 Testing results
        2. 14.1.2 Configuration details for vNIC mode
        3. 14.1.3 Configuration details for pNIC mode
        4. 14.1.4 Methods of sharing bandwidth
      2. 14.2 Initiator and target configuration
        1. 14.2.1 Emulex Virtual Fabric Adapters I and II
        2. 14.2.2 Microsoft iSCSI software initiator
        3. 14.2.3 VMware software initiator
        4. 14.2.4 Storage as iSCSI target
      3. 14.3 Summary
  7. Appendix A. Solution comparison and test results
    1. Solution comparison
    2. Performance test results
    3. Network test
    4. Comparing the CNAs with FCoE
    5. Comparing iSCSI, FCoE, and FC
    6. Comparing iSCSI Windows and VMware software and hardware
    7. Comparing the Emulex CNA on different switches
    8. More real-life testing
    9. Summary of results
  8. Related publications
    1. IBM Redbooks
    2. Other publications
    3. Online resources
    4. Help from IBM
  9. Back cover