NVIDIA C8220 (900-9X81Q-00CV-ST0) ConnectX-8 SuperNIC with PCIe 5 x16, HHHL, Tall Bracket

The NVIDIA® ConnectX®-8 SuperNIC™ is optimized to supercharge hyperscale AI computing workloads.


Description

Introduction

The NVIDIA® ConnectX®-8 SuperNIC™ is optimized to supercharge hyperscale AI computing workloads. With support for both InfiniBand and Ethernet networking at up to 800 gigabits per second (Gb/s), ConnectX-8 SuperNIC delivers high-speed, efficient network connectivity, significantly enhancing system performance for AI factories and cloud data center environments.

Powerful Networking for Generative AI

Central to NVIDIA’s AI networking portfolio, ConnectX-8 SuperNICs fuel the next wave of innovation in forming accelerated, massive-scale AI compute fabrics. They seamlessly integrate with next-generation NVIDIA networking platforms, providing end-to-end 800Gb/s connectivity. These platforms offer the robustness, feature sets, and scalability required for trillion-parameter GPU computing and generative AI applications.

With enhanced power efficiency, ConnectX-8 SuperNICs support the creation of sustainable AI data centers operating hundreds of thousands of GPUs, ensuring a future-ready infrastructure for AI advancements.

ConnectX-8 SuperNICs enable advanced routing and telemetry-based congestion control capabilities, achieving the highest network performance and peak AI workload efficiency. Additionally, ConnectX-8 InfiniBand SuperNICs extend the capabilities of NVIDIA® Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ to boost in-network computing in high-performance computing environments, further enhancing overall efficiency and performance.

Port Splitting

ConnectX-8 SuperNICs offer a variety of network port configurations designed to meet the demands of different environments and deployments.

The Port Splitting feature allows a single physical networking module (QSFP112 or OSFP) to be split into multiple network ports. This provides flexibility in optimizing port configurations for various network topology use cases. For the supported OPNs and configurations, refer to Port Splitting Configurations.
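
As a rough illustration of how a split module appears to the host (on Linux, each split surfaces as its own network interface), here is a minimal sketch that lists NVIDIA/Mellanox network interfaces via sysfs. This is illustrative only and is not NVIDIA tooling; the split itself is configured with NVIDIA's own utilities, and the sysfs paths assume a standard Linux host.

```python
#!/usr/bin/env python3
"""Sketch: enumerate NVIDIA/Mellanox netdevs on a Linux host.

After port splitting, one physical module (QSFP112 or OSFP) shows up as
multiple network interfaces; this just lists them. Configuring the
split itself is done with NVIDIA's tools and is not shown here.
"""
from pathlib import Path

MELLANOX_VENDOR = "0x15b3"  # PCI vendor ID used by Mellanox/NVIDIA NICs

for iface in sorted(Path("/sys/class/net").iterdir()):
    vendor_file = iface / "device" / "vendor"
    if not vendor_file.exists():
        continue  # virtual interfaces (lo, bridges, ...) have no PCI device
    if vendor_file.read_text().strip() == MELLANOX_VENDOR:
        try:
            speed = (iface / "speed").read_text().strip() + " Mb/s"
        except OSError:
            speed = "link down"  # reading speed fails while the link is down
        print(f"{iface.name}: {speed}")
```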

There are two available extension options:

  1. For C8180 SuperNICs: Utilizing the Socket-Direct/Multi-Host capability, where the PCIe extension card is connected to the SuperNIC and is used as an endpoint.

  2. For C8240 and C8220 SuperNICs: Utilizing the Down Stream Port (DSP) option, where the MCIO connector is used as a root complex for downstream devices such as GPUs or SSDs.

Socket Direct SuperNICs

Socket Direct™ technology improves the performance of dual-socket servers by giving each CPU direct access to the network through its own dedicated PCIe interface. Utilizing the Socket-Direct or Multi-Host capability, the PCIe extension card is connected to the SuperNIC and is used as an endpoint extension.

NVIDIA offers ConnectX-8 Socket Direct, which enables 800Gb/s or 400Gb/s connectivity for servers with PCIe Gen5 or Gen4 capability, respectively. The SuperNIC's 32-lane PCIe bus is split into two 16-lane buses, with one bus accessible through a PCIe x16 edge connector and the other bus through an x16 Auxiliary PCIe Connection card. The two cards should be installed into two PCIe x16 slots and connected using an MCIO harness.
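
As a quick sanity check of a Socket Direct installation, both halves should enumerate as separate PCIe functions, each negotiating a x16 link. Below is a minimal sketch that parses `lspci -vv` output (assumes Linux with pciutils installed; run with root privileges so lspci can read the capability registers):

```python
#!/usr/bin/env python3
"""Sketch: print the negotiated PCIe link state (LnkSta) of each
NVIDIA/Mellanox function, e.g. to confirm both Socket Direct halves
came up at x16."""
import re
import subprocess

# -d 15b3: selects devices by the Mellanox/NVIDIA PCI vendor ID
out = subprocess.run(["lspci", "-d", "15b3:", "-vv"],
                     capture_output=True, text=True, check=True).stdout

# lspci prints one block per function, starting at column 0 with its BDF
for block in re.split(r"\n(?=\S)", out):
    if not block.strip():
        continue
    bdf = block.split()[0]
    lnk = re.search(r"LnkSta:\s*Speed\s*([\d.]+)GT/s[^,]*,\s*Width\s*(x\d+)",
                    block)
    if lnk:
        speed, width = lnk.groups()
        print(f"{bdf}: {speed} GT/s, width {width}")  # expect x16 per half
```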

To use the SuperNIC in the Socket-Direct configuration, please order the additional PCIe Auxiliary Card kit. SuperNICs that support Socket Direct can also operate as standalone x16 PCIe cards.

For more information, please refer to the PCIe Auxiliary Card Kit.

Down Stream Port (DSP)

The ConnectX-8 SuperNIC with downstream port extension option provides connectivity to the server backplane or PCIe switch through the MCIO connector.

The default PCIe interface configuration is four x4 links (4 × x4), supporting up to four SSD devices.
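
For a rough sense of what each downstream device gets in this breakout, here is a back-of-the-envelope calculation, assuming PCIe Gen5 signaling (32 GT/s per lane, 128b/130b encoding) and ignoring protocol overhead, so real throughput will be somewhat lower:

```python
# Approximate per-direction PCIe bandwidth for the default 4 x x4 breakout.
GT_PER_LANE = 32        # PCIe Gen5 raw rate per lane, GT/s (assumes a Gen5 link)
ENCODING = 128 / 130    # 128b/130b line-coding efficiency

lane_gbps = GT_PER_LANE * ENCODING  # ~31.5 Gb/s usable per lane
for lanes, label in [(16, "full x16 link"), (4, "each x4 device")]:
    print(f"{label}: ~{lanes * lane_gbps / 8:.1f} GB/s per direction")

# full x16 link: ~63.0 GB/s per direction
# each x4 device: ~15.8 GB/s per direction
```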

PCI Express Slot

  • In PCIe x16 configuration: PCIe Gen6 @ 64GT/s through the x16 edge connector
  • In PCIe x16 extension option (Socket Direct or Switch DSP):
      PCIe Gen5 SERDES @ 32GT/s through the edge connector
      PCIe Gen5 SERDES @ 32GT/s through the PCIe Auxiliary Connection Card or SFF-TA-1016 MCIO

System Power Supply

Refer to Specifications

Operating System

  • In-box drivers for major operating systems:
      Linux: RHEL, Ubuntu
      Windows
  • DOCA Host
  • OpenFabrics Windows Distribution (WinOF-2)

Connectivity

  • Interoperable with 25/100/200/400 Gb/s Ethernet switches and SDR/EDR/HDR100/HDR/NDR/XDR InfiniBand switches
  • Passive copper cable with ESD protection
  • Powered connectors for optical and active cable support
Shipping & Payment

Delivery Options

  • SF-Express
  • FedEx Express

Payment Options

  • WeChat Pay
  • Alipay (HK) (SHOPLINE Payments)
  • Google Pay
  • Apple Pay
  • Credit Card
  • PayPal
  • Bank Transfer