
Mellanox M3601Q InfiniBand Switch Blade
Performance and efficiency in a cost-effective solution

Call for Pricing!


Overview:

High-bandwidth, low-latency fabric for enterprise data centers and high-performance-computing (HPC) environments.

  • Powerful, high-performance server blade switches
  • Full cross-sectional bandwidth from server blades to fabric
  • Reliable transport
  • I/O consolidation
  • Virtualization acceleration

The InfiniScale IV M3601Q 40Gb/s InfiniBand Blade Switch I/O Module for PowerEdge M-Series provides a high-bandwidth, low-latency fabric for enterprise data center and high-performance computing (HPC) environments. Based on the fourth-generation InfiniScale IV InfiniBand switch device, the I/O module delivers up to 40Gb/s of full bisectional bandwidth per port. When used in conjunction with ConnectX InfiniBand dual-port mezzanine I/O cards, clustered databases, parallelized applications, and transactional services achieve significant performance improvements, resulting in reduced completion times and a lower cost per operation.

Sustained Network Performance

The Mellanox M3601Q from Dell™ supports static routing to reduce or eliminate network congestion. For conditions in which output ports are oversubscribed, the M3601Q supports InfiniBand® Trade Association (IBTA) 1.2 congestion control mechanisms. The switch system works together with the ConnectX® I/O host channel adapter (HCA) to restrict the traffic causing congestion and ensure high bandwidth and low latency to all other flows. Whether used for parallel computation or as a converged fabric, the combination of high bandwidth, adaptive or static routing, and congestion control provides excellent traffic-carrying capacity.

Easy to Manage

The M3601Q is easily managed through any IBTA-compliant subnet manager. Mellanox recommends installing and running the OpenFabrics software stack on each server blade. Any server can then run the subnet manager, along with switch management tools. Port configuration and data paths can be set up automatically, or customized to meet the needs of the application. And you can update firmware in-band, for simple network maintenance.
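
For administrators, a quick way to spot-check the fabric from a blade is to wrap the standard OFED diagnostics in a short script. The sketch below is only an illustrative example, assuming the OpenFabrics (OFED) utilities ibstat, sminfo, and ibnetdiscover are installed on the blade and on the PATH; it is not a Dell or Mellanox deliverable, and output formats vary between OFED releases.

    #!/usr/bin/env python3
    # Illustrative fabric spot-check for an OFED-managed InfiniBand network.
    # Assumes the standard OFED diagnostics (ibstat, sminfo, ibnetdiscover)
    # are installed; this script is a hypothetical example, not vendor software.
    import subprocess

    def run(cmd):
        # Run one diagnostic tool and return its text output, or the error text.
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return result.stdout
        except (OSError, subprocess.CalledProcessError) as err:
            return f"{cmd[0]} failed: {err}"

    if __name__ == "__main__":
        print(run(["ibstat"]))         # local ConnectX HCA and port state on this blade
        print(run(["sminfo"]))         # confirm a subnet manager (e.g. OpenSM) is answering
        print(run(["ibnetdiscover"]))  # walk the fabric topology, including the M3601Q ports

If sminfo reports no subnet manager, OpenSM can be started on any blade running the OFED stack.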

Transform your Dell M1000e blade server enclosure

InfiniBand Blade Switches:

Mellanox InfiniBand Blade Switch
Enable high bandwidth and low latency across InfiniBand-connected server nodes in your M1000e blade chassis high-performance computing (HPC) cluster.

Mellanox M3601Q InfiniBand Switch Blade
Meets ultra-high bandwidth and low latency demands for enterprise data centers and high-performance-computing (HPC) environments.

Mellanox M2401G InfiniBand Switch Blade
Meets high bandwidth, low latency, and low power demands for enterprise data centers and high-performance-computing (HPC) environments.

Specifications:

Port Attributes

  • 32 auto-negotiating 10, 20, or 40Gb/s fully non-blocking InfiniBand ports
  • 16 internal server ports
  • Up to 16 external QSFP connector ports for use with uplink cables
    • Passive copper cable
    • Optical media adapter and active cable support
  • Port and system status LED indicators
    • Per port status LEDs: Link, Activity
    • System status LEDs: System status, power

Performance

  • Simultaneous wire-speed any port to any port
  • Total switching bandwidth 2.56Tb/s
  • Up to 16K Multicast Addresses per Subnet
  • Up to 48K Unicast Addresses per Subnet
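
    NOTE: The 2.56Tb/s aggregate corresponds to 32 ports × 40Gb/s per port × 2 directions (full duplex)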

Quality of Service

  • Fine-grained end-to-end QoS
  • Advanced scheduling engine supports quality of service for up to 9 traffic classes over 9 virtual lanes:
    • 8 data
    • 1 management

Other Switching

  • IBTA Specification 1.2 compliant
  • Integrated subnet manager agent
  • Linear forwarding table
  • 256 to 4Kbyte MTU
  • 48K-entry linear forwarding database
  • Hot-swappable, enabling expansion without server interruption
  • Adaptive Routing

Management

  • Diagnostic and debug tools 
  • Port mirroring 
  • Unmanaged switch supports OpenSM or third-party subnet managers 

    NOTE: Mellanox OpenFabrics Enterprise Distribution (OFED) software stack contains a subnet manager along with switch management tools

Chassis

  • 272.54 x 58.98 x 255.02 mm (H x W x D)
  • 10.73 x 2.32 x 10.04 inches (H x W x D)

Standards Supported

  • Safety
    • US/Canada: cTUVus 
    • EU: IEC60950 
    • International: CB
  • EMC (Emissions)
    • USA: FCC, Class A
    • Canada: ICES, Class A
    • EU: CE Mark (EN55022 Class A, EN55024, EN61000-3-2, EN61000-3-3)
    • Japan: VCCI, Class A
    • Korea: RRL (MIC), Class A
    • Australia/New Zealand: C-Tick Class A
  • Environmental
    • EU: IEC 60068-2-64: Random Vibration 
    • EU: IEC 60068-2-29: Shocks, Type I / II 
    • EU: IEC 60068-2-32: Fall Test

Environmental Operating Conditions

  • Operating Temperature: 0°C to 40°C
  • Operating Relative Humidity: 10% to 90% non-condensing 

Maximum Power Consumption

  • 125W
  • Dissipated power: 85W 
  • Power through connector: 2W per port

Pricing Notes:

USA: FREE Ground Shipping
Call for Pricing!