Mellanox InfiniBand Blade Switch
Accelerate throughput with InfiniBand blade switches
Enable high bandwidth and low latency across InfiniBand-connected server nodes in your M1000e blade chassis high-performance computing (HPC) cluster.
- Single-wide switches occupy only one slot, leaving a slot available for fabric redundancy.
- InfiniBand™ Trade Association (IBTA) compliance allows easy management through any IBTA-compliant subnet manager.
- Non-blocking throughput is enabled through 16 dedicated internal and 16 external ports.
Add high-bandwidth, low-latency InfiniBand switches to your Dell M1000e blade chassis.
Get the most data throughput available in a Dell M1000e blade chassis with a Mellanox InfiniBand blade switch. Designed for low-latency, high-bandwidth applications in high-performance computing (HPC) and high-performance data center environments, InfiniBand switches offer 16 internal and 16 external ports to help eliminate the bottlenecks common to other switch designs.
Choose from three single-wide Mellanox InfiniBand blade switches, each offering non-blocking throughput and IBTA management compatibility:
Mellanox M4001Q QDR InfiniBand Blade Switch
- Per-port bit rate: 40 Gb/s
- Per-port data throughput: 32 Gb/s

Mellanox M4001T FDR10 InfiniBand Blade Switch
- Per-port bit rate: 41.25 Gb/s
- Per-port data throughput: 40 Gb/s

Mellanox M4001F FDR InfiniBand Blade Switch
- Per-port bit rate: 56.25 Gb/s
- Per-port data throughput: 54.54 Gb/s
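The gap between each per-port bit rate and its effective data throughput comes from the link encoding: QDR links use 8b/10b encoding (80% efficient), while FDR10 and FDR use 64b/66b (roughly 97% efficient). As a quick sketch of the arithmetic (the encoding schemes are standard InfiniBand background, not stated in this datasheet):

```python
# Effective data rate = signaling rate x encoding efficiency.
# QDR links use 8b/10b encoding; FDR10 and FDR use 64b/66b.
def effective_rate(signal_gbps: float, payload_bits: int, line_bits: int) -> float:
    """Per-port data rate after subtracting link-encoding overhead."""
    return signal_gbps * payload_bits / line_bits

qdr = effective_rate(40.00, 8, 10)     # 32.0 Gb/s
fdr10 = effective_rate(41.25, 64, 66)  # 40.0 Gb/s
fdr = effective_rate(56.25, 64, 66)    # ~54.5 Gb/s (listed as 54.54)

print(round(qdr, 2), round(fdr10, 2), round(fdr, 2))
```

The numbers reproduce the per-port throughput figures quoted for all three switch models.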
Legend: S — Standard, OA — Option Available, N — Not Available
| | Mellanox M4001Q QDR | Mellanox M4001T FDR10 | Mellanox M4001F FDR |
|---|---|---|---|
| Port attributes | 32 auto-negotiating 10/20/40 Gb/s fully non-blocking InfiniBand ports; 16 internal server ports; up to 16 external QSFP connector ports for use with uplink cables | 32 auto-negotiating 10/20/40 Gb/s fully non-blocking InfiniBand ports; 16 internal server ports; up to 16 external QSFP connector ports for use with uplink cables | 32 auto-negotiating 10/20/40/56 Gb/s fully non-blocking InfiniBand ports; 16 internal server ports; up to 16 external QSFP connector ports for use with uplink cables |
| Passive copper cable | S | S | S |
| Optical media adapter and active cable support | S | S | S |
| Port and system status LED indicators | S | S | S |
| Per-port status LEDs: link, activity | S | S | S |
| System status LEDs: system status, power | S | S | S |
| Simultaneous wire speed, any port to any port | S | S | S |
| Single-port transmit and receive bit rate | 40.0 Gb/s | 41.25 Gb/s | 56.25 Gb/s |
| Single-port transmit and receive effective data rate | 32.0 Gb/s | 40.0 Gb/s | 54.54 Gb/s |
| Total switching bandwidth | 2.56 Tb/s | 2.56 Tb/s | 3.584 Tb/s |
| Quality of Service | | | |
| Fine-grained end-to-end QoS | S | S | S |
| Advanced scheduling engine supporting QoS for up to 9 traffic classes (9 virtual lanes) | S | S | S |
| IBTA specification | 1.21 compliant | N | 1.3 compliant |
| Integrated subnet manager agent | S | S | S |
| Linear forwarding table | S | S | S |
| 256 B to 4 KB MTU | S | S | S |
| 48K-entry linear forwarding database | S | S | S |
| Hot-swappable, enabling expansion without server interruption | S | S | S |
| Diagnostic and debug tools | S | S | S |
| Unmanaged switch; supports OpenSM or third-party subnet managers | S | S | S |
| Height x Width x Depth | 272.54 x 29.45 x 255.02 mm (10.73 x 1.16 x 10.04 in.) | 272.54 x 29.45 x 255.02 mm (10.73 x 1.16 x 10.04 in.) | 272.54 x 29.45 x 255.02 mm (10.73 x 1.16 x 10.04 in.) |
Note: The Mellanox OpenFabrics Enterprise Distribution (OFED) software stack includes a subnet manager along with switch management tools.
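The total switching bandwidth figures in the table line up with 32 ports running full duplex at the nominal per-port link rate (40 Gb/s for QDR and FDR10, 56 Gb/s for FDR). A back-of-envelope check; the nominal rates are an assumption here, since the datasheet lists only the totals:

```python
# Total switching bandwidth ~= ports x nominal link rate x 2 (full duplex).
def switching_bandwidth_tbps(ports: int, nominal_gbps: float) -> float:
    """Aggregate bidirectional switching capacity in Tb/s."""
    return ports * nominal_gbps * 2 / 1000.0  # Gb/s -> Tb/s

qdr_total = switching_bandwidth_tbps(32, 40)  # 2.56 Tb/s (M4001Q and M4001T)
fdr_total = switching_bandwidth_tbps(32, 56)  # 3.584 Tb/s (M4001F)
print(qdr_total, fdr_total)
```

Both results match the quoted 2.56 Tb/s and 3.584 Tb/s totals, which is why the two 40 Gb/s-class switches share the same aggregate figure.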
Download the Dell PowerEdge M Series Blades IO Guide Datasheet (PDF).
- Pricing and product availability subject to change without notice.