Nvidia A100-40G GPU

The NVIDIA A100-40G, a data center-grade AI inference and training GPU, offers optimized performance for mid-range computational needs.


HK$122,000.00

Description

Nvidia A100-40G Overview

The NVIDIA A100-40G is a data center-grade GPU for AI inference and training, tuned for mid-range computational needs. Its combination of throughput and efficiency makes it well suited to engineers and data scientists tackling complex machine learning tasks and other demanding workflows.
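
As a purely illustrative example of the training workloads this card targets, the sketch below runs a single mixed-precision training step in PyTorch. The model, batch shapes, and hyperparameters are placeholders chosen for the example, not part of the product specification.

```python
# Minimal mixed-precision training step on an A100-class GPU (illustrative only).
# Assumes PyTorch with CUDA available; model and data shapes are arbitrary placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()          # loss scaling for reduced-precision training
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(256, 1024, device=device)        # placeholder batch
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast():               # routes eligible matmuls to Tensor Cores on Ampere
    loss = criterion(model(inputs), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(f"loss: {loss.item():.4f}")
```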


Nvidia A100-40G Product Highlights
  • Efficient AI Inference
  • Strong Training Capability
  • Data Center Optimization
  • Mid-range Computational Power
  • High Performance GPU

Product Features Comparison
Feature              | Nvidia A100-40G | Nvidia A100-80G | Nvidia V100 | AMD Instinct MI100 | Google TPU v4  | Intel Habana Gaudi | Nvidia T4
Memory Size          | 40 GB           | 80 GB           | 32 GB       | 32 GB              | 16 GB          | 32 GB              | 16 GB
Memory Bandwidth     | 1,555 GB/s      | 2,039 GB/s      | 900 GB/s    | 1,232 GB/s         | 700 GB/s       | 1,023 GB/s         | 300 GB/s
Processing Power     | 19.5 TFLOPS     | 15.7 TFLOPS     | 14 TFLOPS   | 11.5 TFLOPS        | N/A            | N/A                | 8.1 TFLOPS
Inference Efficiency | High            | High            | Medium      | Low                | Medium         | Medium             | Medium
Training Performance | Optimal         | Optimal         | Strong      | Mid-tier           | High           | High               | Low
Use Case             | Data Centers    | Heavy Workloads | Research    | Enterprise         | Cloud Services | AI Development     | Small Data Centers
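
One hedged way to read the bandwidth and processing-power rows above is as a rough FLOP-per-byte ratio. The short script below computes that ratio from the table's own figures for the entries that list both numbers; it is a back-of-the-envelope comparison, not a benchmark.

```python
# Rough compute-to-bandwidth ratio (FLOP per byte moved) using the comparison table's figures.
# Back-of-the-envelope only; real throughput depends on precision, kernels, and workload.
specs = {
    "Nvidia A100-40G":    {"tflops": 19.5, "bandwidth_gbs": 1555},
    "Nvidia A100-80G":    {"tflops": 15.7, "bandwidth_gbs": 2039},
    "Nvidia V100":        {"tflops": 14.0, "bandwidth_gbs": 900},
    "AMD Instinct MI100": {"tflops": 11.5, "bandwidth_gbs": 1232},
    "Nvidia T4":          {"tflops": 8.1,  "bandwidth_gbs": 300},
}

for name, s in specs.items():
    flop_per_s = s["tflops"] * 1e12
    bytes_per_s = s["bandwidth_gbs"] * 1e9
    print(f"{name}: {flop_per_s / bytes_per_s:.1f} FLOP per byte of memory traffic")
```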

Nvidia A100-40G Product Application Scenarios
  • AI Model Training
  • Data Analytics
  • Scientific Computing
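
For the scenarios listed above, a representative GPU workload is a large dense matrix multiply. The snippet below times one using standard PyTorch CUDA events; the matrix size is arbitrary and the result will vary with precision and driver settings.

```python
# Time a large matrix multiply on the GPU, a stand-in for the training, analytics, and
# scientific-computing scenarios above. Sizes are arbitrary; this is not a benchmark.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device.type == "cuda":
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    c = a @ b
    end.record()
    torch.cuda.synchronize()
    print(f"matmul time: {start.elapsed_time(end):.1f} ms")
else:
    c = a @ b
    print("Ran on CPU; GPU timing skipped.")
```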

Optional Add-ons
Accessory Model          | Description
HGX A100 4-GPU Baseboard | Multi-GPU expansion for larger workloads
Mellanox ConnectX-6 VPI  | High-speed networking adapter
NVIDIA NVSwitch          | Interconnect for seamless multi-GPU communication
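
When the card is paired with the multi-GPU add-ons above, a quick generic check is whether the visible GPUs report direct peer-to-peer access to each other. The sketch below uses PyTorch's standard query for this; it does not rely on any vendor-specific tooling.

```python
# Check direct peer-to-peer access between visible GPUs (relevant when combining the card
# with the multi-GPU baseboard or NVSwitch add-ons above). Generic PyTorch check only.
import torch

count = torch.cuda.device_count()
print(f"Visible CUDA devices: {count}")
for i in range(count):
    for j in range(count):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            status = "available" if ok else "not available"
            print(f"GPU {i} -> GPU {j}: peer access {status}")
```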

Nvidia A100-40G Specifications

Model                    | A100-40G
Memory                   | 40 GB HBM2
Memory Bandwidth         | 1.6 TB/s
CUDA Cores               | 6,912
Tensor Cores             | 432
TDP                      | 400 W
Architecture             | Ampere GA100
NVLink Bandwidth         | 600 GB/s
Multi-Instance GPU (MIG) | Yes, up to 7 instances
Process Technology       | 7 nm
Base Clock Speed         | 765 MHz
Boost Clock Speed        | 1,410 MHz
PCI Express Generation   | PCIe 4.0
DirectX                  | 12.0
OpenGL                   | 4.6
Form Factor              | Dual-slot, full-height
Interface                | PCIe 4.0 x16
Number of GPUs           | 1
Max GPU Temperature      | 85°C
Display Support          | N/A (data center GPU)
Cooling Solution         | Passive
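
As a sanity check against the table above, the snippet below reads back a few device properties at runtime (name, memory, SM count, compute capability) with standard PyTorch calls. Reported usable memory is normally slightly below the nominal 40 GB.

```python
# Read back device properties to compare against the specification table above.
# Values come from the driver; usable memory is slightly below the nominal 40 GB.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Name:               {props.name}")
    print(f"Total memory:       {props.total_memory / 1024**3:.1f} GiB")
    print(f"Multiprocessors:    {props.multi_processor_count}")   # 108 SMs (6,912 CUDA cores / 64 per SM)
    print(f"Compute capability: {major}.{minor}")                 # 8.0 corresponds to Ampere GA100
else:
    print("No CUDA device visible.")
```
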
Shipping & Payment

Delivery Options

  • SF-Express
  • FedEx Express

Payment Options

  • WeChat Pay
  • Alipay (HK)_SHOPLINE Payments
  • Google Pay
  • Apple Pay
  • Credit Card
  • PayPal
  • Bank Transfer
Customer Reviews
No reviews yet.