The NVIDIA A100-80G is a top-tier data-center GPU for AI, data analytics, and scientific computing. Aimed at engineers and researchers, it delivers a significant step up in processing power and efficiency across a wide range of computational workloads.
| Feature | Nvidia A100-80G | Nvidia V100 | AMD Instinct MI100 | Tesla T4 | Nvidia A40 | Google TPUs | Nvidia RTX 8000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Memory Capacity | 80GB HBM2e | 32GB HBM2 | 32GB HBM2 | 16GB GDDR6 | 48GB GDDR6 | N/A | 48GB GDDR6 |
| Tensor Cores | 432 | 640 | 768 | 320 | 336 | 128 | 576 |
| FP64 Performance | 9.7 TFLOPS | 7.8 TFLOPS | 11.5 TFLOPS | 0.13 TFLOPS | 3.8 TFLOPS | N/A | 0.6 TFLOPS |
| Peak Performance | 312 TFLOPS | 125 TFLOPS | 184.6 TFLOPS | 8.1 TFLOPS | 40 TFLOPS | 100+ PFLOPS | 16.3 TFLOPS |
| Target Use | AI, Analytics | Deep Learning | AI, HPC | Inference | Rendering | AI, ML | Graphics |
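The A100's peak-performance figure above reflects mixed-precision Tensor Core throughput rather than the FP32/FP64 CUDA-core rates listed in the specification table. As a rough illustration, the following is a minimal sketch of an FP16-input, FP32-accumulate GEMM through cuBLAS, the kind of operation those Tensor Core numbers describe; the matrix size is arbitrary and error handling is omitted.

```cpp
// Minimal sketch: FP16-input, FP32-accumulate GEMM via cuBLAS.
// On Ampere this path is typically routed through the Tensor Cores;
// the matrix size N is purely illustrative and inputs are left
// uninitialized, since this only demonstrates the call.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main() {
    const int N = 4096;                      // illustrative matrix size
    __half *A, *B;                           // FP16 inputs
    float  *C;                               // FP32 output/accumulator
    cudaMalloc(&A, sizeof(__half) * N * N);
    cudaMalloc(&B, sizeof(__half) * N * N);
    cudaMalloc(&C, sizeof(float)  * N * N);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // FP16 A/B, FP32 C, FP32 compute type: mixed-precision GEMM.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                 &alpha,
                 A, CUDA_R_16F, N,
                 B, CUDA_R_16F, N,
                 &beta,
                 C, CUDA_R_32F, N,
                 CUBLAS_COMPUTE_32F, CUBLAS_GEMM_DEFAULT);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Compile with, for example, `nvcc gemm_fp16.cu -lcublas`.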
Related NVIDIA products commonly paired with the A100-80G:

| Model | Description |
| --- | --- |
| ConnectX-6 VPI | Advanced network adapter for increased connectivity |
| NVSwitch | NVLink switch fabric for multi-GPU scalability |
| NVIDIA NGC | Comprehensive GPU cloud software suite |
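NVSwitch and NVLink matter because they let GPU-to-GPU transfers bypass PCIe (the A100's NVLink bandwidth is 600 GB/s, per the specification table below). As a minimal sketch, assuming a machine with at least two GPUs installed, the CUDA runtime calls below check and enable peer-to-peer access, which is how multi-GPU code typically takes advantage of that fabric.

```cpp
// Minimal sketch: check and enable GPU peer-to-peer access with the
// CUDA runtime. On NVLink/NVSwitch-connected A100s, peer copies ride
// the fabric instead of PCIe. Assumes at least two GPUs are present.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            printf("GPU %d -> GPU %d peer access: %s\n", a, b,
                   canAccess ? "yes" : "no");
            if (canAccess) {
                cudaSetDevice(a);
                cudaDeviceEnablePeerAccess(b, 0);  // flags must be 0
            }
        }
    }
    return 0;
}
```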
| Nvidia A100-80G Specification | Value |
| --- | --- |
| CUDA Cores | 6912 |
| Memory Size | 80 GB |
| Memory Type | HBM2e |
| Memory Bandwidth | 2039 GB/s |
| TFLOPS (FP32) | 19.5 |
| TFLOPS (FP64) | 9.7 |
| Datatype Support | FP32, FP64, INT8, INT4, INT1 |
| NVLink Bandwidth | 600 GB/s |
| Form Factor | FHFL (full-height, full-length) |
| Interface | PCIe 4.0 x16 |
| Power Consumption | 400W |
| Manufacturing Process | 7nm |
| Power Connectors | 8-pin |
| Max GPU Temperature | 85°C |
| Cooling Solution | Passive |
| Slot Width | Double slot |
| Tensor Cores | 432 |
| GPU Architecture | Ampere |
| Deep Learning Acceleration | Yes |
| Driver Support | Windows, Linux |
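A quick way to sanity-check an installed card against the table above is to query its properties through the CUDA runtime. This is a minimal sketch that assumes the A100 is device 0; an A100-80G should report compute capability 8.0, 108 multiprocessors (108 × 64 = 6912 CUDA cores), and roughly 80 GB of global memory.

```cpp
// Minimal sketch: query the installed GPU and compare against the
// specification table (error handling omitted, device 0 assumed).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0

    printf("Name:               %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Global memory:      %.1f GB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    printf("Multiprocessors:    %d\n", prop.multiProcessorCount);
    printf("Memory bus width:   %d-bit\n", prop.memoryBusWidth);
    return 0;
}
```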