Mellanox InfiniBand Switches
Building small to large clusters using low-latency, high-throughput FDR 56Gb/s and EDR 100Gb/s technologies.
Mellanox SX6012
The SX6012 is an ideal choice for high performance in smaller departmental or back-end clusters, such as storage, database and GPGPU clusters.
Key features include:
- FDR-10 link speed supporting 20% more bandwidth than QDR
- Onboard subnet manager for out-of-the-box fabric bring-up for up to 648 nodes
- Virtual protocol interconnect (VPI) to simplify data center I/O design
Mellanox SX6025
The SX6025 is an ideal choice for top-of-rack leaf connectivity or for building small- to extremely large-sized clusters.
Key features include:
- Increased computing efficiency via static routing, adaptive routing and congestion control
- Reversible airflow for data centers with different thermal designs
- Cost-efficiency for building high-performance clusters and data centers
Mellanox SX6036
The SX6036 is an ideal choice for top-of-rack leaf connectivity or for building small- to medium-sized clusters.
Key features include:
- Delivers low port-to-port latency and up to 4Tb/s of non-blocking bandwidth for high-performance computing (HPC) environments and enterprise data centers (EDC)
- Virtual protocol interconnect (VPI) that simplifies systems by serving multiple fabrics
- Enhanced Ethernet connectivity for Dell PowerEdge PCIe Gen3 servers
Mellanox SB7800
The SB7800 provides in-network computing through Co-Design Scalable Hierarchical Aggregation Protocol (SHArP) technology, making it an ideal choice for high-performance needs, such as storage, database and GPGPU clusters.
Key features include:
- Up to 7Tb/s of managed non-blocking bandwidth with 90ns port-to-port latency and up to 100Gb/s full bi-directional bandwidth per port
- Increased computing efficiency via static routing, adaptive routing and congestion control, which eliminate congestion hot spots to ensure maximum effective fabric bandwidth
- Onboard subnet manager for simple, out-of-the-box fabric bring-up for up to 2,000 nodes
- MLNX-OS® software delivering management for firmware, power supplies, ports and other interfaces
Mellanox SB7890
The SB7890 provides in-network computing through Co-Design Scalable Hierarchical Aggregation Protocol (SHArP) technology, which helps deliver high fabric performance of up to 7Tb/s of non-blocking bandwidth with 90ns port-to-port latency. The SB7890 provides up to 100Gb/s full bi-directional bandwidth per port, making it an ideal high-performance choice for storage, database and GPGPU clusters.
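The aggregate bandwidth figures quoted above follow from simple arithmetic on port count and per-port line rate, counting both directions of each full bi-directional port. A minimal sketch of that calculation, assuming the 36-port configurations these switch families are commonly built around (the port counts are an assumption, not stated in this document):

```python
def switching_capacity_tbps(ports: int, port_speed_gbps: float) -> float:
    """Aggregate non-blocking switching capacity in Tb/s.

    Every port is assumed to transmit and receive at line rate
    simultaneously (full bi-directional), hence the factor of 2.
    """
    return ports * port_speed_gbps * 2 / 1000

# SB7800/SB7890: assumed 36 EDR ports at 100Gb/s each
print(switching_capacity_tbps(36, 100))  # 7.2 -> quoted as "up to 7Tb/s"

# SX6036: assumed 36 FDR ports at 56Gb/s each
print(switching_capacity_tbps(36, 56))   # 4.032 -> quoted as "up to 4Tb/s"
```

The quoted datasheet numbers round these raw products down to the nearest whole Tb/s.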