
Nvidia A100-40G GPU

The NVIDIA A100-40G, a data center-grade AI inference and training GPU, offers optimized performance for mid-range computational needs.


HK$122,000.00
Product Description

Nvidia A100-40G Overview

The NVIDIA A100-40G is a data center-grade GPU for AI inference and training, offering optimized performance for mid-range computational needs. It is well suited to engineers and data scientists running complex machine learning training jobs, inference services, and other demanding workloads.
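
As an illustration of the training and inference workloads mentioned above, the short Python sketch below runs a single mixed-precision training step with PyTorch, which lets the matrix math run on the A100's Tensor Cores. It assumes a host with PyTorch and CUDA support installed; the model, optimizer, and batch are placeholder examples rather than anything supplied with the card.

```python
# Minimal sketch of one mixed-precision training step. Assumes PyTorch with
# CUDA support; the model and data below are placeholders for illustration.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=device.type == "cuda")

# Dummy batch standing in for real training data.
inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
# autocast runs the matmuls in reduced precision so Tensor Cores are used.
with torch.cuda.amp.autocast(enabled=device.type == "cuda"):
    loss = loss_fn(model(inputs), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(f"loss: {loss.item():.4f}")
```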


Nvidia A100-40G Product Highlights
  • Efficient AI Inference
  • Strong Training Capability
  • Data Center Optimization
  • Mid-range Computational Power
  • High-Performance GPU

Product Features Comparison
Feature              | Nvidia A100-40G | Nvidia A100-80G | Nvidia V100 | AMD Instinct MI100 | Google TPU v4  | Intel Habana Gaudi | Nvidia T4
Memory Size          | 40 GB           | 80 GB           | 32 GB       | 32 GB              | 16 GB          | 32 GB              | 16 GB
Memory Bandwidth     | 1,555 GB/s      | 2,039 GB/s      | 900 GB/s    | 1,232 GB/s         | 700 GB/s       | 1,023 GB/s         | 300 GB/s
Processing Power     | 19.5 TFLOPS     | 15.7 TFLOPS     | 14 TFLOPS   | 11.5 TFLOPS        | N/A            | N/A                | 8.1 TFLOPS
Inference Efficiency | High            | High            | Medium      | Low                | Medium         | Medium             | Medium
Training Performance | Optimal         | Optimal         | Strong      | Mid-tier           | High           | High               | Low
Use Case             | Data Centers    | Heavy Workloads | Research    | Enterprise         | Cloud Services | AI Development     | Small Data Centers

Nvidia A100-40G Product Application Scenarios
  • AI Model Training
  • Data Analytics
  • Scientific Computing

Optional Add-ons
Accessory Model          | Description
HGX A100 4-GPU Baseboard | Multi-GPU expansion for larger workloads
Mellanox ConnectX-6 VPI  | High-speed networking adapter
NVIDIA NVSwitch          | Interconnect for seamless multi-GPU communication

Nvidia A100-40G Specifications

Specification            | Value
Model                    | A100-40G
Memory                   | 40 GB HBM2
Memory Bandwidth         | 1.6 TB/s
CUDA Cores               | 6,912
Tensor Cores             | 432
TDP                      | 400 W
Architecture             | Ampere GA100
NVLink Bandwidth         | 600 GB/s
Multi-Instance GPU (MIG) | Yes, up to 7 instances
Process Technology       | 7 nm
Base Clock Speed         | 765 MHz
Boost Clock Speed        | 1,410 MHz
PCI Express Generation   | PCIe 4.0
DirectX                  | 12.0
OpenGL                   | 4.6
Form Factor              | Dual-slot, full-height
Interface                | PCIe 4.0 x16
Number of GPUs           | 1
Max GPU Temperature      | 85°C
Display Support          | N/A (data center GPU)
Cooling Solution         | Passive
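
As a rough way to cross-check these figures on an installed card, the short Python sketch below reads back the device name, memory size, and MIG mode through NVIDIA's management library. It assumes the NVIDIA driver and the nvidia-ml-py (pynvml) package are present; this is an illustrative sketch, not vendor-supplied tooling.

```python
# Query the first GPU's name, total memory, and MIG mode via NVML.
# Assumes the NVIDIA driver and the nvidia-ml-py (pynvml) package are installed.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()

    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU:    {name}")
    print(f"Memory: {mem.total / 1024**3:.1f} GiB total")

    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"MIG:    current={current}, pending={pending}")
    except pynvml.NVMLError:
        print("MIG:    not supported on this GPU")
finally:
    pynvml.nvmlShutdown()
```
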
Delivery & Payment Methods

Delivery Methods

  • SF Express
  • FedEx

Payment Methods

  • WeChat Pay
  • Alipay (HK)_SHOPLINE Payments
  • Google Pay
  • Apple Pay
  • Credit Card
  • PayPal
  • Bank Transfer / ATM

Customer Reviews

No reviews yet.

Related Products