The Dell blade server products are built around the M1000e enclosure, which can hold server blades, an embedded EqualLogic iSCSI storage area network and I/O modules, including Ethernet, Fibre Channel and InfiniBand switches.
- 1 Enclosure
- 2 Available server-blades
- 3 Mezzanine cards
- 4 Blade storage
- 5 PowerConnect switches
- 6 Force10 switches
- 7 Cisco switches
- 8 Other I/O cards
- 9 Managing enclosure
- 10 Power and cooling
Enclosure
The M1000e fits in a 19-inch rack and is 10 rack units (44 cm) high, 17.6" (44.7 cm) wide and 75.4 cm deep. The empty blade enclosure weighs 44.5 kg, while a fully loaded system can weigh up to 178.8 kg.
Servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back, together with the management module(s) (CMC, or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to use the virtual KVM switch to access the main console of each installed server.
In June 2013 Dell introduced the PowerEdge VRTX, a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (the only blades supported in the VRTX), are not interchangeable between the VRTX and the M1000e: the blades differ in firmware and mezzanine connectors.
The M1000e enclosure has a front side and a back side, and all communication between the inserted blades and modules goes via the midplane. The midplane has the same function as a backplane, but has connectors at both sides: the front side is dedicated to server-blades and the back to I/O modules.
The midplane is completely passive. The server-blades are inserted in the front side of the enclosure while all other components can be reached via the back.
The original M1000e enclosures came with midplane version 1.0, but that midplane does not support the 10GBASE-KR standard on fabric A (10GBASE-KR is supported on fabrics B and C). The capabilities per midplane version are:
- Midplane 1.0: fabric A - 1Gb Ethernet; fabrics B and C - 1Gb, 10Gb or 40Gb Ethernet, 4Gb or 8Gb Fibre Channel, InfiniBand DDR, QDR or FDR10.
- Midplane 1.1: fabric A - 1Gb or 10Gb Ethernet; fabrics B and C - 1Gb, 10Gb or 40Gb Ethernet, 4Gb, 8Gb or 16Gb Fibre Channel, InfiniBand DDR, QDR, FDR10 or FDR.
To run 10Gb Ethernet on fabric A, or 16Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B and C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1, and older enclosures can be upgraded. The installed version can be identified via the markings on the back of the enclosure, just above the I/O modules: an "arrow down" above the six I/O slots means the 1.0 midplane was installed in the factory, while three or four horizontal bars mean midplane 1.1 was installed. As the midplane can be upgraded, the outside markings are not decisive: the actually installed version can also be checked via the CMC management interface.
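The version differences above amount to a small capability matrix. A minimal Python sketch of that lookup (the structure and names are illustrative, not a Dell API):

```python
# Supported speeds per midplane revision, fabric and technology
# (summarised from the list above; illustrative structure only).
MIDPLANE_CAPS = {
    "1.0": {
        "A":   {"Ethernet": {1}},
        "B/C": {"Ethernet": {1, 10, 40}, "FC": {4, 8},
                "InfiniBand": {"DDR", "QDR", "FDR10"}},
    },
    "1.1": {
        "A":   {"Ethernet": {1, 10}},
        "B/C": {"Ethernet": {1, 10, 40}, "FC": {4, 8, 16},
                "InfiniBand": {"DDR", "QDR", "FDR10", "FDR"}},
    },
}

def supports(midplane: str, fabric: str, tech: str, speed) -> bool:
    """Check whether a midplane revision carries a given fabric speed."""
    return speed in MIDPLANE_CAPS[midplane][fabric].get(tech, set())

print(supports("1.0", "A", "Ethernet", 10))  # False: 10Gb on fabric A needs 1.1
print(supports("1.1", "B/C", "FC", 16))      # True
```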
Each M1000e enclosure can hold up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades or combinations (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16, where 1-8 are the upper blades and 9-16 sit directly beneath 1-8. A full-height blade uses slot n (where n = 1 to 8) and slot n + 8. Integrated at the bottom of the front side is a connection option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection (15-pin). Next to this is a power button with power indication.
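The slot arithmetic above can be captured in a few lines; a minimal Python sketch (hypothetical helper, not a Dell tool):

```python
def occupied_slots(start_slot: int, form_factor: str) -> list[int]:
    """Slots used by a blade inserted at start_slot (1-16).

    A half-height blade uses one slot; a full-height blade in slot n
    (n = 1..8) also occupies slot n + 8, per the M1000e numbering.
    Quarter-height blades sit four to a full-height sleeve instead.
    """
    if form_factor == "half":
        return [start_slot]
    if form_factor == "full":
        if not 1 <= start_slot <= 8:
            raise ValueError("full-height blades start in slots 1-8")
        return [start_slot, start_slot + 8]
    raise ValueError(f"unknown form factor: {form_factor}")

print(occupied_slots(5, "full"))  # [5, 13]
```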
Next to these is a small LCD screen with navigation buttons which provides system information without the need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it forward and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible: a blue LED indicates normal operation and an orange LED indicates a problem of some kind.
This LCD display can also be used for the initial configuration wizard in a newly delivered (unconfigured) system, allowing the operator to configure the CMC IP address.
Back: power, management and I/O
All other parts and modules are placed at the rear of the M1000e. The rear side is divided into three sections. At the top one inserts the three management modules: one or two CMC modules and an optional iKVM module. At the bottom of the enclosure there are six bays for power supply units; a standard M1000e operates with three PSUs. In between is an area with 3 x 3 bays for cooling fans (left - middle - right) and up to six I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1, while the right-hand side has places for A2, B2 and C2. The fabric A I/O modules connect to the on-board I/O controllers, which in most cases are a dual 1Gb or 10Gb Ethernet NIC. When a blade has a dual-port on-board 1Gb NIC, the first NIC connects to the I/O module in fabric A1 and the second NIC to the module in fabric A2; the blade slot corresponds with the internal Ethernet interface, e.g. the on-board NIC in slot 5 connects to interface 5 of fabric A1 and the second on-board NIC to interface 5 of fabric A2.
I/O modules in fabric B1/B2 connect to the (optional) Mezzanine card B (or 2) in the server, and those in fabric C to Mezzanine C (or 3).
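That fixed mapping lends itself to a simple lookup; a minimal Python sketch for a half-height blade with dual-port adapters (illustrative names, not a Dell API):

```python
# Fabric to which each adapter port connects; the internal switch port
# number equals the blade slot, per the mapping described above.
ADAPTER_FABRICS = {
    "lom_port1": "A1",    # first on-board NIC
    "lom_port2": "A2",    # second on-board NIC
    "mezz_b_port1": "B1",
    "mezz_b_port2": "B2",
    "mezz_c_port1": "C1",
    "mezz_c_port2": "C2",
}

def switch_port(blade_slot: int, adapter: str) -> tuple[str, int]:
    """Return (I/O bay, internal port) for a half-height blade's adapter."""
    return ADAPTER_FABRICS[adapter], blade_slot

print(switch_port(5, "lom_port1"))  # ('A1', 5)
```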
Available server-blades
An M1000e holds up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades or a mix of them (e.g. 2 full-height + 12 half-height). The quarter-height blades require a full-size sleeve for installation. The list below covers the currently available 11th-generation (11G) blades and the latest 12th-generation models; there are also older blades such as the M605, M805 and M905 series.
PowerEdge M420
The PE M420 is a "quarter-size" blade: where most servers are half-height, allowing 16 blades per M1000e enclosure, up to 32 of the quarter-size M420 can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the automatic IP address assignment for the iDRAC management card in each blade, but as it is now possible to run 32 blades per chassis the management IP assignment for the iDRAC may need to change. Supporting the M420 requires CMC firmware 4.1 or later and a full-size "sleeve" that holds up to four M420 blades. It also has consequences for the normal I/O NIC assignment: most (half-height) blades have two LOMs (LAN On Motherboard), one connecting to the switch in the A1 fabric and the other to the A2 fabric, and the same applies to the Mezzanine cards B and C. All available I/O modules (except for the PCM6348) have 16 internal ports: one for each half-height blade. As an M420 has two 10Gb LOM NICs, a fully loaded chassis would require 2 x 32 internal switch ports for the LOMs, and the same for the Mezzanine cards; to support all on-board NICs one needs a 32-slot Ethernet switch such as the MXL or the Force10 I/O Aggregator. An M420 server supports only a single Mezzanine card (Mezzanine B), whereas all half-height and full-height systems support two. For the Mezzanine card the mapping is different: the connections from Mezzanine B on the PE M420 are "load-balanced" between the B and C fabrics of the M1000e: the Mezzanine card in sleeve slot A (the top slot) connects to fabric C, while slot B (the second slot from the top) connects to fabric B, and this is repeated for slots C and D in the sleeve.
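A minimal Python sketch of the M420 sleeve mapping and the iDRAC address planning described above (illustrative only; the helper name is hypothetical):

```python
# Mezzanine B of an M420 lands on fabric C for sleeve slots A and C,
# and on fabric B for sleeve slots B and D (the load-balancing above).
SLEEVE_SLOT_TO_FABRIC = {"A": "C", "B": "B", "C": "C", "D": "B"}

def m420_mezzanine_fabric(sleeve_slot: str) -> str:
    return SLEEVE_SLOT_TO_FABRIC[sleeve_slot.upper()]

# A fully loaded chassis runs 32 M420 blades instead of 16 half-height
# ones, so reserve 32 iDRAC management addresses per chassis.
blades_per_chassis = 32
print(m420_mezzanine_fabric("a"), blades_per_chassis)  # C 32
```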
PowerEdge M520
A half-height server with up to two 8-core Intel Xeon E5-2400 CPUs, running the Intel C600 chipset and offering up to 384 GB of RAM via 12 DIMM slots. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a choice of Intel or Broadcom LOM and two Mezzanine slots for I/O. The M520 can also be used in the PowerEdge VRTX system.
PowerEdge M620
A half-height server with up to two 12-core Intel Xeon E5-2600 v2 CPUs, running the Intel C600 chipset and offering up to 768 GB of RAM via 24 DIMM slots. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a range of RAID controller options. The blade has two external and one internal USB port plus two SD card slots. It can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL, and can also be ordered with Citrix XenServer or VMware vSphere ESXi, or use Hyper-V, which comes with Windows 2008 R2. According to the vendor, all Generation 12 servers are optimized to run as virtualisation platforms. Out-of-band management is done via iDRAC 7 through the CMC.
PowerEdge M630
A half-height server with up to two 22-core Intel Xeon E5-2600 v3/v4 CPUs, running the Intel C610 chipset and offering up to 768 GB of RAM via 24 DIMM slots, or 640 GB via 20 DIMM slots when using 145 W CPUs. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a choice of Intel or Broadcom LOM and two Mezzanine slots for I/O. The M630 can also be used in the PowerEdge VRTX system.
PowerEdge M820
A full-height server with four 8-core Intel Xeon E5-4600 CPUs, running the Intel C600 chipset and offering up to 1.5 TB of RAM via 48 DIMM slots. Up to four on-blade 2.5" SAS HDDs/SSDs or two PCIe flash SSDs are installable for local storage. The M820 offers a choice of three different on-board converged Ethernet adapters for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic, and up to two additional Mezzanine cards for Ethernet, Fibre Channel or InfiniBand I/O.
PowerEdge M610
A half-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and the Intel 5520 chipset. Memory options via 12 DIMM slots allow up to 192 GB of DDR3 RAM. It supports a maximum of two on-blade hot-pluggable 2.5" hard disks or SSDs, and a choice of built-in NICs for Ethernet or a converged network adapter (CNA), Fibre Channel or InfiniBand. The video card is a Matrox G200.
PowerEdge M610x
A full-height blade server that has the same capabilities as the half-height M610, but with an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards.
PowerEdge M710
A full-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and up to 192 GB of RAM. It supports a maximum of four on-blade hot-pluggable 2.5" hard disks or SSDs, and a choice of built-in NICs for Ethernet or a converged network adapter, Fibre Channel or InfiniBand. The server uses the Intel 5520 chipset and a Matrox G200 video card.
PowerEdge M710HD
A two-socket version of the M710, but now in a half-height blade. The CPUs can be two quad-core or six-core Xeon 5500 or 5600 with the Intel 5520 chipset. Via 18 DIMM slots up to 288 GB of DDR3 RAM can be put on this blade, with the standard choice of on-board Ethernet NICs based on Broadcom or Intel and one or two Mezzanine cards for Ethernet, Fibre Channel or InfiniBand.
PowerEdge M910
A full-height server of the 11th generation with up to four 10-core Intel Xeon E7 CPUs, four 8-core Xeon 7500-series CPUs or two 8-core Xeon 6500-series CPUs, 512 GB or 1 TB of DDR3 RAM and two hot-swappable 2.5" hard drives (spinning or SSD). It uses the Intel E7510 chipset, with a choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand.
PowerEdge M915
Also a full-height 11G server, using the AMD Opteron 6100 or 6200 series CPUs with the AMD SR5670 and SP5100 chipset. Memory comes via 32 DDR3 DIMM slots offering up to 512 GB of RAM. On board are up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two mezzanine cards for dual-port 10Gb Ethernet, dual-port FCoE, dual-port 8Gb Fibre Channel or dual-port Mellanox InfiniBand. Video is via the on-board Matrox G200eW with 8 MB of memory.
Mezzanine cards
Each server comes with Ethernet NICs on the motherboard. These 'on-board' NICs connect to a switch or pass-through module inserted in the A1 or A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O, each blade has two so-called mezzanine slots: slot B connecting to the switches/modules in bays B1 and B2, and slot C connecting to C1/C2. An M1000e chassis holds up to six switches or pass-through modules. For redundancy one would normally install switches in pairs: the switch in bay A2 is normally the same model as the A1 switch, and together they connect the blades' on-motherboard NICs to the data or storage networks.
(Converged) Ethernet Mezzanine cards
Standard blade-servers have one or more built-in NICs that connect to the 'default' switch slot (the A fabric) in the enclosure (often blade-servers also offer one or more external NIC interfaces at the front of the blade). If a server should have more physical (internal) interfaces or connect to different switch-blades in the enclosure, extra mezzanine cards can be placed on the blade. The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for its PowerEdge blades:
- Broadcom 57712 dual-port CNA
- Brocade BR1741M-k CNA
- Mellanox ConnectX-2 dual 10Gb card
- Intel dual port 10Gb Ethernet
- Intel Quad port Gigabit Ethernet
- Intel Quad port Gigabit Ethernet with virtualisation technology and iSCSI acceleration features
- Broadcom NetXtreme II 5709 dual- and quad-port Gigabit Ethernet (dual port with iSCSI offloading features)
- Broadcom NetXtreme II 5711 dual port 10Gb Ethernet with iSCSI offloading features
Apart from the above the following mezzanine cards are available:
- Emulex LightPulse LPe1105-M4 Host adapter
- Mellanox ConnectX IB MDI Dual-Port InfiniBand Mezzanine Card
- QLogic SANblade HBA
- SANsurfer Pro
Blade storage
In most setups the server-blades use external storage (NAS using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs (or even just an SD card with a boot OS such as VMware ESX). It is also possible to use completely diskless blades that boot via PXE or external storage. But regardless of the local and boot storage, the majority of the data used by blades is stored on a SAN or NAS external to the blade enclosure.
Dell offers the EqualLogic PS M4110 models of iSCSI storage arrays, which are physically installed in the M1000e chassis: such a SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, getting power from the enclosure system, etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate with it via Ethernet, so the system does require an accepted Ethernet blade-switch in the back (or a pass-through module plus a rack-switch); there is no option for direct communication between the server-blades in the chassis and the M4110. The M4110 thus mainly allows a user to pack a complete mini-datacentre in a single enclosure (19" rack, 10 RU).
Depending on the model and the disk drives used, the PS M4110 offers a system (raw) storage capacity between 4.5 TB (M4110XV with 14 x 146 GB 15K SAS HDDs) and 14 TB (M4110E with 14 x 1 TB 7.2K SAS HDDs). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs.
Each M4110 comes with one or two controllers and two 10 Gigabit Ethernet interfaces for iSCSI. Management of the SAN goes via the chassis management interface (CMC). Because the iSCSI traffic uses 10Gb interfaces, the SAN should be used in combination with one of the 10G blade switches: the PCM8024-k or the Force10 MXL switch. The enclosure's midplane hardware version should be at least version 1.1 to support 10Gb KR connectivity.
At the rear side of the enclosure are the power supplies, fan trays, one or two chassis management modules (the CMCs) and a virtual KVM switch. The rear also offers six bays for I/O modules, numbered in three pairs: A1/A2, B1/B2 and C1/C2. The A bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure).
PowerConnect switches
The Dell PowerConnect switches are modular switches for use in the Dell M1000e blade server enclosure. The M6220, M6348, M8024 and M8024-k are all switches in the same family, based on the same fabrics (Broadcom) and running the same firmware version.
The most important difference between the M-series switches and the Dell PowerConnect classic switches (e.g. the 8024 model) is the fact that most interfaces are internal interfaces that connect to the blade-servers via the midplane of the enclosure. Also, the M-series switches cannot run outside the enclosure: they only work when inserted in the enclosure.
PowerConnect M6220
This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces, with the option to extend it with up to four 10Gb external interfaces for uplinks, or two 10Gb uplinks and two stacking ports to stack several PCM6220 switches into one large logical switch.
PowerConnect M6348
This is a 48-port switch: 32 internal 1Gb interfaces (two per server-blade) and 16 external copper (RJ45) Gigabit interfaces. There are also two SFP+ slots for 10Gb uplinks and two CX4 slots that can either be used for two extra 10Gb uplinks or to stack several M6348 switches into one logical switch. The M6348 offers four 1Gb interfaces to each blade, which means the switch can only be utilized to full capacity with blades that offer four internal NICs on the A fabric (i.e. the internal/on-motherboard NICs). The M6348 can be stacked with other M6348 switches, but also with the PCT7000 series rack-switches.
PowerConnect M8024 and M8024-k
The M8024 and M8024-k offer 16 internal autosensing 1Gb/10Gb interfaces and up to eight external ports via one or two I/O modules, each of which can offer 4 x 10Gb SFP+ slots, 3 x CX4 10Gb (only) copper interfaces or 2 x 10GBASE-T 1/10Gb RJ-45 interfaces. The PCM8024 has been 'end of sales' since November 2011, replaced by the PCM8024-k. Since firmware update 4.2 the PCM8024-k partially supports FCoE via FIP (FCoE Initialisation Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces.
Also since firmware 4.2, the PCM8024-k can be stacked using external 10Gb Ethernet interfaces assigned as stacking ports. Although this stacking option was introduced in the same firmware release for the PCT8024 and PCT8024-f, blade (PCM) and rack (PCT) versions cannot be stacked in a single stack. The new features are not available on the 'original' PCM8024: firmware 4.2.x for the PCM8024 only corrects bugs; no new features or functionality are added to 'end of sale' models.
All PowerConnect M-series (PCM) switches are multi-layer switches, offering both layer 2 (Ethernet) options and layer 3 (IP routing) options.
Depending on the model the switches offer internally 1Gbit/s or 10Gbit/s interfaces towards the blades in the chassis. The PowerConnect M series with "-k" in the model-name offer 10Gb internal connections using the 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking-interfaces but can also be used to connect non-blade servers to the network.
On the link level, PCM switches support link aggregation: both static LAGs and LACP. Like all PowerConnect switches, they run RSTP as the Spanning Tree Protocol, but it is also possible to run MSTP (Multiple Spanning Tree). The internal ports towards the blades are set as edge or "portfast" ports by default. Another feature is link dependency: the switch can, for example, be configured to shut down all internal ports to the blades when it becomes isolated because it loses its uplink to the rest of the network.
All PCM switches can be configured as pure layer-2 switches, or they can be configured to do all routing: both routing between the configured VLANs and external routing. Besides static routes, the switches also support OSPF and RIP routing. When using the switch as a routing switch one needs to configure VLAN interfaces and assign an IP address to each VLAN interface: it is not possible to assign an IP address directly to a physical interface.
All PowerConnect blade switches, except for the original PC-M8024, can be stacked. To stack the newer PC-M8024-k switch, the switches need to run firmware version 4.2 or higher. In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-k switches. The only exception is the capability to stack the blade PCM6348 together with the rack-switches PCT7024 or PCT7048. Stacks can contain multiple switches within one M1000e chassis, but one can also stack switches from different chassis to form one logical switch.
Force10 switches
MXL 10/40 Gb switch
At Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40Gbit/s blade switch, and later a 10/40Gbit/s concentrator. The FTOS MXL 40Gb was introduced on 19 July 2012. The MXL provides 32 internal 10Gbit/s links (two ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots allowing a maximum of four additional QSFP+ 40Gbit/s ports or eight 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stacking) uplink or, with a break-out cable, 4 x 10Gbit/s links. Dell offers direct-attach cables with a QSFP+ interface on one side and 4 x SFP+ on the other end, or a QSFP+ transceiver on one end and four fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch.
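The external-port arithmetic above, as a small Python sketch (assuming QSFP+ expansion modules in both slots and break-out cables on every port; purely illustrative):

```python
# Built-in and modular external ports on the MXL, per the text above.
FIXED_QSFP = 2            # built-in QSFP+ 40Gb ports
EXPANSION_SLOTS = 2       # each slot can take a 2-port QSFP+ module
QSFP_PER_MODULE = 2
BREAKOUT_PER_QSFP = 4     # one QSFP+ fans out to 4 x 10Gb via break-out

total_qsfp = FIXED_QSFP + EXPANSION_SLOTS * QSFP_PER_MODULE  # 6 ports
max_40g_uplinks = total_qsfp                                 # 6 x 40Gb
max_10g_uplinks = total_qsfp * BREAKOUT_PER_QSFP             # 24 x 10Gb
print(total_qsfp, max_40g_uplinks, max_10g_uplinks)          # 6 6 24
```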
The MXL switches also support Fibre Channel over Ethernet, so that server-blades with a converged network adapter mezzanine card can be used for both data and storage traffic using a Fibre Channel storage system. The MXL 10/40Gbit/s blade switch runs FTOS and, because of this, is the first M1000e I/O product without a web graphical user interface.
I/O Aggregator
In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis, running FTOS. The I/O Aggregator offers 32 internal 10Gb ports towards the blades, two standard 40Gbit/s QSFP+ uplinks and two extension slots. Depending on the requirements, there are extension modules for 40Gb QSFP+ ports, 10Gb SFP+ or 1-10GBASE-T copper interfaces. Up to 16 x 10Gb uplinks can be assigned to the distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data Center Bridging) features.
Cisco switches
Dell also offers some Cisco Catalyst switches for this blade enclosure. Cisco offers a range of switches for blade-systems from the main vendors: besides the Dell M1000e enclosure, Cisco offers similar switches for HP, FSC and IBM blade-enclosures.
For the Dell M1000e there are two model ranges for Ethernet switching (note: Cisco also offers the Catalyst 3030, but this switch is for the old Generation 8 or Gen 9 blade systems, not for the current M1000e enclosure):
The Catalyst 3032: a layer-2 switch with 16 internal and 4 external 1Gb Ethernet interfaces, with an option to extend to 8 external 1Gb interfaces. The built-in external ports are 10/100/1000BASE-T copper interfaces with an RJ45 connector, and up to 4 additional 1Gb ports can be added using the extension module slots, each of which offers 2 SFP slots for fibre-optic or Twinax 1Gb links. The Catalyst 3032 does not offer stacking (virtual blade switching).
The 3130 series switches offer 16 internal 1Gb interfaces towards the blade-servers. For the uplink or external connections there are two options: the 3130G offers 4 built-in 10/100/1000BASE-T RJ-45 ports and two module bays allowing for up to 4 SFP 1Gb slots using SFP transceivers or SFP Twinax cables. The 3130X also offers the 4 external 10/100/1000BASE-T connections, plus two module bays for X2 10Gb uplinks.
Both 3130 switches offer 'stacking' or the 'virtual blade switch': up to 8 Catalyst 3130 switches can be stacked to behave like one single switch. This can simplify the management of the switches and the (spanning tree) topology, as the combined switches are just one switch for spanning tree considerations. It also allows the network manager to aggregate uplinks from physically different switch units into one logical link. The 3130 switches come standard with IP Base IOS, offering all layer 2 and the basic layer 3 (routing) capabilities. Users can upgrade this basic license to IP Services or IP Advanced Services, adding routing capabilities such as the EIGRP, OSPF or BGP4 routing protocols, IPv6 routing and hardware-based unicast and multicast routing. These advanced features are built into the IOS on the switch, but a user has to upgrade to the IP (Advanced) Services license to unlock them.
Nexus Fabric Extender
Since January 2013, Cisco and Dell offer a Nexus Fabric Extender for the M1000e chassis: the Nexus B22Dell. Such FEXes were already available for HP and Fujitsu blade systems, and now there is also a FEX for the M1000e blade system. The release of the B22Dell came approximately 2.5 years after the initially planned and announced date: a disagreement between Dell and Cisco resulted in Cisco stopping the development of the FEX for the M1000e in 2010. Customers manage a FEX from a core Nexus 5500 series switch.
Other I/O cards
An M1000e enclosure can hold up to six switches or other I/O cards. Besides the Ethernet switches mentioned above (the PowerConnect M-series, Force10 MXL and Cisco Catalyst 3100 switches), the following I/O modules are available or usable in a Dell M1000e enclosure:
- Ethernet pass-through modules bring internal server-interfaces to an external interface at the back of the enclosure. There are pass-through modules for 1G, 10G-XAUI and 10G 10GbaseXR. All passthrough modules offer 16 internal interfaces linked to 16 external ports on the module.
- Emulex 4 or 8 Gb Fibre Channel Passthrough Module
- Brocade 5424 8Gb FC switch for Fibre Channel based Storage area network
- Dell 4 or 8Gb Fibre-channel NPIV Port aggregator
- Mellanox 2401G and 4001F/Q - InfiniBand Dual Data Rate or Quad Data Rate modules for high-performance computing
- Infiniscale 4: 16-port 40Gb InfiniBand switch
- Cisco M7000e InfiniBand switch with 8 external DDR ports
- the PowerConnect M8428-k switch described below, with 4 "native" 8Gb Fibre Channel interfaces:
PCM 8428-k Brocade FCoE
Although the PCM8024-k and MXL switches support Fibre Channel over Ethernet, they are not 'native' FCoE switches: they have no Fibre Channel interfaces. These switches would need to be connected to a "native" FCoE switch such as the PowerConnect B-series 8000e (the same as a Brocade 8000 switch) or a Cisco Nexus 5000 series switch with Fibre Channel interfaces (and licenses). The PCM8428-k is the only full Fibre Channel over Ethernet capable switch for the M1000e enclosure, offering 16 enhanced Ethernet 10Gb internal interfaces, 8 (enhanced) Ethernet 10Gb external ports and up to four 8Gb Fibre Channel interfaces to connect directly to an FC SAN controller or a central Fibre Channel switch.
The switch runs Brocade FC firmware for the fabric and Fibre Channel switch, and Foundry OS for the Ethernet switch configuration. In capabilities it is very comparable to the PowerConnect B8000; only the form factor and the number of Ethernet and FC interfaces differ.
PowerConnect M5424 / Brocade 5424
This is a full Brocade Fibre Channel switch. It uses either the B or C fabric to connect the Fibre Channel mezzanine cards in the blades to the FC-based storage infrastructure. The M5424 offers 16 internal ports connecting to the FC mezzanine cards in the blade-servers and 8 external ports. From the factory only the first two external ports (17 and 18) are licensed; additional connections require extra Dynamic Ports On Demand (DPOD) licenses. The switch runs on a PowerPC 440EPX processor at 667 MHz with 512 MB of DDR2 RAM system memory. Further it has 4 MB boot flash and 512 MB compact flash memory on board.
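The DPOD licensing above can be illustrated with a tiny Python sketch (hypothetical helper, not Brocade software; it assumes licenses enable external ports in ascending order):

```python
EXTERNAL_PORTS = range(17, 25)   # external ports 17-24 on the M5424
FACTORY_LICENSED = 2             # ports 17 and 18 enabled from factory

def usable_external_ports(extra_dpod_ports: int = 0) -> list[int]:
    """External ports enabled for a given number of extra licensed ports."""
    enabled = FACTORY_LICENSED + extra_dpod_ports
    return list(EXTERNAL_PORTS)[:enabled]

print(usable_external_ports())    # [17, 18]
print(usable_external_ports(4))   # [17, 18, 19, 20, 21, 22]
```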
Brocade 4424
Like the 5424, the 4424 is also a Brocade SAN I/O module offering 16 internal and 8 external ports. The switch supports speeds up to 4 Gbit/s. As delivered, 12 of the ports are licensed for operation; with additional licenses all 24 ports can be enabled. The 4424 runs on a PowerPC 440GP processor at 333 MHz with 256 MB of SDRAM system memory, 4 MB boot flash and 256 MB compact flash memory.
InfiniBand
There are several modules available offering InfiniBand connectivity on the M1000e chassis. InfiniBand offers the high-bandwidth/low-latency intra-computer connectivity required in academic HPC clusters, large enterprise datacentres and cloud applications. There is the SFS M7000e InfiniBand switch from Cisco: the Cisco SFS offers 16 internal 'autosensing' interfaces for single (10 Gbit/s, SDR) or double (20 Gbit/s, DDR) data rate and 8 DDR external/uplink ports. The total switching capacity is 960 Gbit/s.
The Mellanox M4001 switches offer either 40 Gbit/s (M4001Q) or 56 Gbit/s (M4001F) connectivity and have 16 external interfaces using QSFP ports and 16 internal connections to the InfiniBand mezzanine cards on the blades. As with all other non-Ethernet switches, they can only be installed in the B or C fabric of the M1000e enclosure, as the A fabric connects to the on-motherboard NICs of the blades, which only come as Ethernet or converged Ethernet NICs.
The Mellanox 2401G offers 24 ports: 16 internal and 8 external. Unlike the M4001 switches, where the external ports use QSFP ports for fibre transceivers, the 2401 has CX4 copper cable interfaces. The switching capacity of the M2401 is 960 Gbit/s.
The M4001, with 16 internal and 16 external ports at either 40 or 56 Gbit/s, offers a switching capacity of 2.56 Tbit/s.
Passthrough modules
In some setups no switching capabilities are wanted or needed in the enclosure. For example, if only a few of the blade-servers use Fibre Channel storage, a fully manageable FC switch is not needed: one just wants to connect the 'internal' FC interface of a blade directly to the (existing) FC infrastructure. A pass-through module has only very limited management capabilities. Other reasons to choose pass-through instead of 'enclosure switches' could be the wish to have all switching done on a single-vendor infrastructure; if that vendor's switches are not available as M1000e modules (thus not one of the switches from Dell PowerConnect, Dell Force10 or Cisco) one could opt for pass-through modules:
- 32-port 10/100/1000 Mbit/s Gigabit Ethernet pass-through card: connects the 16 internal Ethernet interfaces (one per blade) to 16 external RJ45 10/100/1000 Mbit/s copper ports
- 32-port 10Gb NIC version: supports 16 internal 10Gb ports and 16 external SFP+ slots
- 32-port 10Gb CNA version: supports 16 internal 10Gb CNA ports and 16 external CNAs
- Dell 4 or 8Gb Fibre-channel NPIV Port aggregator
- Intel/QLogic offer a QDR InfiniBand pass-through module for the Dell M1000e chassis, and a mezzanine version of the QLE7340 QDR IB HCA.
Managing enclosure
An M1000e enclosure offers several ways of management. The M1000e offers 'out of band' management: a dedicated VLAN (or even physical LAN) for management. The CMC modules in the enclosure offer management Ethernet interfaces and do not rely on network connections made via the I/O switches in the enclosure. One would normally connect the Ethernet links on the CMC, avoiding a switch in the enclosure. Often a physically isolated LAN is created for management, allowing management access to all enclosures even when the entire infrastructure is down. Each M1000e chassis can hold two CMC modules.
Each enclosure can have either one or two CMC controllers, and by default the CMC web GUI can be accessed via https, with SSH for command-line access. It is also possible to access the enclosure management via a serial port for CLI access, or to use a local keyboard, mouse and monitor via the iKVM switch. It is possible to daisy-chain several M1000e enclosures.
The information below assumes the use of the web GUI of the M1000e CMC, although all functions are also available via the text-based CLI. To access the management system one opens the CMC web GUI via https, using the out-of-band management IP address of the CMC. When the enclosure is in 'stand-alone' mode the web GUI gives a general overview of the entire system: it shows how the system looks in reality, including the status LEDs etc. By default the Ethernet interface of a CMC card gets an address from a DHCP server, but it is also possible to configure an IPv4 or IPv6 address via the LCD display at the front of the chassis. Once the IP address is set or known, the operator can access the web GUI using the default root account that is built in from the factory.
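As a minimal sketch of that first contact with the CMC web GUI over https (the management address is a placeholder; CMCs commonly present a self-signed certificate, hence verification is disabled here):

```python
import requests

CMC_ADDRESS = "192.0.2.10"  # placeholder out-of-band management IP

# The CMC typically presents a self-signed certificate, so this sketch
# skips verification; in production, trust the proper certificate instead.
response = requests.get(f"https://{CMC_ADDRESS}/", verify=False, timeout=5)
print(response.status_code)  # 200 indicates the web GUI is reachable
```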
Via the CMC management one can configure chassis-related features: management IP addresses, authentication features (local user list, using a RADIUS or TACACS server), access options (web GUI, CLI, serial link, KVM etc.) and error logging (syslog server). Via the CMC interface one can also configure the blades in the system and iDRAC access to those servers. Once enabled, the iDRAC (and with that the console of the server) can be accessed via this web GUI or by directly opening the web GUI of the iDRAC.
The same applies to the I/O modules in the rear of the system: via the CMC one can assign an IP address to the I/O module in one of the six slots and then browse to the web GUI of that module (if there is a web-based GUI: unmanaged pass-through modules offer no web GUI, as there is nothing to configure).
LCD display
On the front side of the chassis there is a small, hidden LCD screen with three buttons: one 4-way directional button allowing navigation through the menus on the screen, and two "on/off" push buttons which work as "OK" and "Escape" buttons. The screen can be used to check the status of the enclosure and the modules in it: for example active alarms on the system, the IP address of the CMC or KVM, and the system names. Especially in an environment with several enclosures in one datacentre it can be useful to check whether one is working on the correct enclosure. Unlike rack or tower servers, individual blades have only a very limited set of indicators: a blade server has a power LED and (local) disk-activity LEDs, but no LCD display offering alarms, hostnames etc. Nor are there LEDs for I/O activity: this is all combined in this little screen, which gives information on both the enclosure and the inserted servers, switches, fans, power supplies etc. The LCD screen can also be used for the initial configuration of an unconfigured chassis: setting the interface language and the IP address of the CMC for further CLI or web-based configuration. During normal operation the display can be "pushed" into the chassis and is mostly hidden; to use it one needs to pull it out and tilt it to read the screen and access the buttons.
Blade 17: Local management I/O
A blade-system is not really designed for local (on-site) management, and nearly all communication with the modules in the enclosure and the enclosure itself is done via the CMC card(s) at the back of the enclosure. At the front side of the chassis, directly adjacent to the power button, one can connect a local terminal: a standard VGA monitor connector and two USB connectors. This connection is referred to inside the system as 'blade 17' and offers a local interface to the CMC management cards.
iDRAC remote access
Apart from normal operational access to the blade servers (e.g. SSH sessions to a Linux-based OS, RDP to a Windows-based OS etc.) there are roughly two ways to manage the server blades: via the iDRAC function or via the iKVM switch. Each blade in the enclosure comes with a built-in iDRAC that allows access to the console over an IP connection. The iDRAC on a blade-server works in the same way as an iDRAC card in a rack or tower server: there is a special iDRAC network to get access to the iDRAC function. In rack or tower servers a dedicated iDRAC Ethernet interface connects to the management LAN; on blade-servers this works the same, except that the iDRAC setup is configured via the CMC. Access to the iDRAC of a blade is NOT linked to any of the on-board NICs: even if all the server's NICs were down (thus all the on-motherboard NICs and also Mezzanine B and C) the iDRAC remains accessible.
iKVM: Remote console access
Apart from that, one can also connect a keyboard, mouse and monitor directly to the server: on a rack or tower server one would either connect the I/O devices when needed or have all the servers connected to a KVM switch. The same is possible with servers in a blade-enclosure: via the optional iKVM module in an enclosure one can access each of the 16 blades directly. It is possible to include the iKVM switch in an existing network of digital or analogue KVM switches. The iKVM switch in the Dell enclosure is an Avocent switch, and one can connect (tier) the iKVM module to other digital KVM switches such as the Dell 2161 and 4161 or Avocent DSR digital switches. Tiering the iKVM to analogue KVM switches such as the Dell 2160AS or 180AS or other Avocent (compatible) KVM switches is also possible. Unlike the CMC, the iKVM switch is not redundant, but as one can always access a server (also) via its iDRAC, an outage of the KVM switch does not stop access to the server console.
Flex addresses
The M1000e enclosure offers the option of flex addresses. This feature allows the system administrators to use dedicated or fixed MAC addresses and World Wide Names (WWNs) that are linked to the chassis, the position of the blade and the location of the I/O interface. It allows administrators to physically replace a server-blade and/or a mezzanine card while the system continues to use the same MAC addresses and/or WWNs for that blade, without the need to manually change any MAC or WWN addresses and without the risk of introducing duplicate addresses: with flex addresses the system assigns a globally unique MAC/WWN based on the location of the interface in the chassis. The flex addresses are stored on an SD card that is inserted in the CMC module of a chassis; when used, they override the addresses burned into the interfaces of the blades in the system.
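Conceptually, flex addressing means "the address follows the slot, not the hardware". A hedged Python sketch of that idea (purely illustrative; real flex addresses come from a Dell-assigned range stored on the CMC's SD card):

```python
# Hypothetical location-keyed pool: the key is (blade slot, fabric), not
# the NIC hardware, so a replaced blade or mezzanine keeps "its" address.
FABRICS = ("A1", "A2", "B1", "B2", "C1", "C2")
FLEX_POOL = {
    (slot, fabric): f"02:DE:11:{slot:02X}:{FABRICS.index(fabric):02X}:00"
    for slot in range(1, 17) for fabric in FABRICS
}

def effective_mac(slot: int, fabric: str, burned_in_mac: str,
                  flex_enabled: bool = True) -> str:
    """With flex addressing, the location-based address overrides the
    NIC's burned-in address; without it, the hardware address is used."""
    return FLEX_POOL[(slot, fabric)] if flex_enabled else burned_in_mac

print(effective_mac(5, "A1", "00:11:22:33:44:55"))  # 02:DE:11:05:00:00
```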
Power and cooling
The M1000e enclosure is, like most blade systems, aimed at IT infrastructures demanding high availability. (Nearly) everything in the enclosure supports redundant operation: each of the three I/O fabrics (A, B and C) supports two switches or pass-through cards, and the enclosure supports two CMC controllers, even though the chassis can run with only one CMC. Power and cooling are also redundant: the chassis supports up to six power supplies and nine fan units. All power supplies and fan units are inserted from the back and are hot-swappable. The power supplies are located at the bottom of the enclosure, while the fan units are placed next to and in between the switch or I/O modules. Each power supply is a 2700 W unit and uses 208-240 V AC as input voltage. A chassis can run with as few as two power supplies (a 2+0 non-redundant configuration). Depending on the required redundancy one can use a 2+2 or 3+3 setup (input redundancy, where each group of supplies is connected to a different power source) or a 3+1, 4+2 or 5+1 setup, which gives protection if one power supply unit fails, but not against losing an entire AC power group.
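A small Python sketch of the usable capacity under the PSU configurations named above (2700 W per unit as stated; this illustrates the redundancy arithmetic, not Dell's actual power-budget algorithm):

```python
PSU_WATTS = 2700

def usable_watts(active: int, redundant: int) -> int:
    """Capacity that remains guaranteed after losing the redundant supplies:
    in 3+3 grid redundancy an entire AC feed (3 PSUs) may fail; in 5+1
    PSU redundancy a single supply may fail."""
    return active * PSU_WATTS

for active, redundant in [(2, 0), (2, 2), (3, 3), (3, 1), (4, 2), (5, 1)]:
    print(f"{active}+{redundant}: {usable_watts(active, redundant)} W guaranteed")
```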