The Mellanox MCX556A-ECAT (now part of NVIDIA) is a high-performance, intelligent adapter card from the ConnectX-5 family. Designed for the most data-intensive applications, it uses Virtual Protocol Interconnect (VPI) technology to run either 100Gb/s InfiniBand (EDR) or 100/50/40/25/10Gb/s Ethernet on a single dual-port adapter.
Leveraging a PCIe 3.0 x16 host interface, the ConnectX-5 VPI adapter delivers industry-leading throughput, extremely low latency (sub-600ns), and a message rate of up to 200 million messages per second. It is well suited to High-Performance Computing (HPC), machine learning and artificial intelligence (AI/ML), cloud computing, Web 2.0, data analytics, and storage platforms.
VPI (Virtual Protocol Interconnect) Flexibility: The card can operate natively in either InfiniBand or Ethernet mode, allowing data centers to future-proof their infrastructure and switch protocols without hardware changes.
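In practice, the operating mode of each port is selected with NVIDIA's firmware configuration tools rather than in application code. The short Python sketch below shells out to mlxconfig (from the MFT package) to put both ports into Ethernet mode; the MST device path and the use of a Python wrapper are assumptions for illustration only, and a driver restart or reboot is normally needed before the new link type takes effect.

```python
import subprocess

# Minimal sketch: switch both ports of a ConnectX-5 VPI card to Ethernet mode.
# Assumes the NVIDIA MFT tools are installed; the MST device path below is a
# placeholder (check your system with `mst status`).
MST_DEVICE = "/dev/mst/mt4119_pciconf0"  # hypothetical path for illustration

def set_port_mode(link_type: str = "ETH") -> None:
    """Set LINK_TYPE_P1/P2 to IB or ETH via mlxconfig."""
    value = {"IB": "1", "ETH": "2"}[link_type]
    subprocess.run(
        ["mlxconfig", "-y", "-d", MST_DEVICE, "set",
         f"LINK_TYPE_P1={value}", f"LINK_TYPE_P2={value}"],
        check=True,
    )

if __name__ == "__main__":
    set_port_mode("ETH")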
Intelligent Offloads & In-Network Computing: As part of the Mellanox Smart Interconnect suite, the ConnectX-5 features acceleration engines that free up the host CPU. Key offloads include:
NVMe over Fabrics (NVMe-oF) Offload: Enables high-performance, efficient NVMe storage access over the network.
ASAP2 (Accelerated Switching and Packet Processing) Technology: Offloads virtual switching (such as Open vSwitch) to the adapter hardware, dramatically improving network performance in virtualized and NFV environments (see the sketch after this list).
MPI Tag Matching and AlltoAll Offload: Hardware-based acceleration of Message Passing Interface (MPI) operations significantly boosts HPC application performance.
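As a concrete illustration of the ASAP2 item above, Open vSwitch hardware offload on Linux is usually enabled with standard ethtool and ovs-vsctl commands rather than a dedicated API. The Python sketch below wraps those steps; the interface name is a placeholder, and the exact procedure (switchdev/devlink configuration, service names) varies by driver, distribution, and OVS version.

```python
import subprocess

# Minimal sketch: enable Open vSwitch hardware offload (ASAP2-style) on a
# ConnectX-5 uplink. The interface name is a placeholder; depending on the
# deployment, the NIC may also need its eswitch placed in switchdev mode
# (e.g. `devlink dev eswitch set ... mode switchdev`) before offload works.
IFACE = "enp3s0f0"  # hypothetical port name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Allow TC flower rules to be offloaded to the adapter hardware.
run(["ethtool", "-K", IFACE, "hw-tc-offload", "on"])

# Tell Open vSwitch to push datapath flows down to the hardware.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])
```

After restarting the OVS service, offloaded flows can typically be inspected with `ovs-appctl dpctl/dump-flows type=offloaded`.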
RDMA and GPUDirect Support: Native support for RDMA over Converged Ethernet (RoCE) and InfiniBand, combined with Mellanox PeerDirect (GPUDirect) technology, provides high-speed, direct communication between the adapter and GPU memory, bypassing the CPU for accelerated AI and scientific computing.
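A common way to exercise GPUDirect-style transfers is through a CUDA-aware MPI library, where a GPU-resident buffer is handed to MPI directly and the interconnect moves it without staging in host memory. The mpi4py/CuPy sketch below is a generic illustration of that pattern, assuming a CUDA-aware MPI build and two GPU-equipped ranks; it is not an NVIDIA-documented procedure specific to this card.

```python
# Minimal sketch: send a GPU-resident buffer with CUDA-aware MPI.
# Assumes an MPI build with CUDA support (so GPU pointers can be passed
# directly) plus the mpi4py and cupy packages.
# Run with, for example: mpirun -np 2 python gpu_send.py
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# 64 MiB of float32 data living in GPU memory.
buf = cp.zeros(16 * 1024 * 1024, dtype=cp.float32)

if rank == 0:
    buf += 1.0
    # With a CUDA-aware MPI and GPUDirect RDMA, this send can go straight
    # from GPU memory to the wire through the adapter, bypassing the host CPU.
    comm.Send([buf, MPI.FLOAT], dest=1, tag=0)
elif rank == 1:
    comm.Recv([buf, MPI.FLOAT], source=0, tag=0)
    print("rank 1 received, first element =", float(buf[0]))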
Host Chaining Technology: Allows the creation of torus and hypercube network topologies without the need for external leaf switches, reducing cost and latency in rack-level designs.
Advanced Storage Capabilities: Includes hardware offloads for Erasure Coding (Reed-Solomon) and T10-DIF signature handover, accelerating data protection and integrity checks for distributed storage systems.
High-Performance Computing (HPC): Accelerates simulations, modeling, and research with MPI offloads and ultra-low latency.
Artificial Intelligence (AI) and Machine Learning (ML): Ideal for distributed training and inference, leveraging GPUDirect and high throughput.
Enterprise and Cloud Data Centers: Provides virtualization offloads (SR-IOV, ASAP2) and flexible VPI connectivity for private and public clouds (an SR-IOV example follows this list).
Storage and Databases: Enables high-performance NVMe-oF storage arrays and fast access to data lakes.
Financial Services (High-Frequency Trading): Utilizes deterministic low latency for trade execution and market data delivery.
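For the virtualization scenarios above, SR-IOV virtual functions on a Linux host are normally created through the kernel's standard sysfs interface. The sketch below shows that mechanism; the interface name and VF count are placeholders, and SR-IOV must already be enabled in the adapter firmware and system BIOS.

```python
from pathlib import Path

# Minimal sketch: create SR-IOV virtual functions on a ConnectX-5 port via sysfs.
# Requires root privileges; the interface name below is a placeholder.
IFACE = "enp3s0f0"   # hypothetical physical function
NUM_VFS = 4          # number of virtual functions to create

vf_path = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")

# Writing 0 first clears any existing VF configuration before setting a new count.
vf_path.write_text("0")
vf_path.write_text(str(NUM_VFS))

print(f"Requested {NUM_VFS} VFs on {IFACE}; verify with `lspci | grep -i virtual`")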
This adapter combines the robust feature set of the ConnectX-5 architecture with the flexibility of VPI, making it a versatile and powerful choice for modern, high-speed networking environments.