The right GPU model for every workload

A comparison of the NVIDIA® Hopper™ architecture with the Ampere™ and Ada Lovelace™ architectures

All of NVIDIA's GPU models are powerful options; the right choice depends heavily on the specific workload requirements of the project.

The models in comparison

NVIDIA H100 NVL & H100 HGX (Hopper architecture)

For inference of large language models with up to 175B parameters, NVIDIA offers the H100 NVL, an extended, PCIe-based H100 GPU with an NVLink bridge. The H100 NVL is optimized for AI testing, training, and inference, and especially for deep learning tasks and large language models.

To efficiently process highly complex tasks, the NVIDIA HGX H100 combines eight H100 GPUs on an integrated baseboard. The eight-GPU HGX H100 provides fully connected point-to-point NVLink links between the GPUs. By leveraging the H100's multi-precision Tensor Cores, an 8x HGX H100 delivers roughly 32 petaFLOPS of FP8 deep learning compute.
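The headline figure can be sanity-checked from the per-GPU numbers in the table below; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the 8x HGX H100 FP8 figure.
# 3,958 TFLOPS is the per-GPU FP8 Tensor Core throughput (with sparsity)
# listed in the spec table below.
fp8_tflops_per_gpu = 3958
num_gpus = 8

total_tflops = fp8_tflops_per_gpu * num_gpus
total_pflops = total_tflops / 1000
# ~31.7 PFLOPS, which NVIDIA rounds to the ~32 petaFLOPS headline figure.
print(f"{total_pflops:.1f} petaFLOPS FP8 (with sparsity)")
```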

Recommended workloads:

  • NVIDIA H100 NVL
    • Models smaller than 175B parameters
    • Inference
    • Data analysis
  • NVIDIA H100 HGX
    • Models larger than 175B parameters
    • Inference
    • High-performance computing
    • Deep learning training
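The parameter thresholds above map roughly to GPU memory. A rule-of-thumb sketch for the memory needed just to store model weights, assuming FP16/BF16 weights at 2 bytes per parameter (activations and KV cache add further overhead):

```python
def weights_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed to hold model weights, in GiB.

    bytes_per_param: 2 for FP16/BF16 weights, 1 for FP8/INT8 quantization.
    """
    return num_params * bytes_per_param / 2**30

# 70B parameters in FP16 -> ~130 GiB: already more than one 94 GB H100 NVL,
# so even mid-sized models are sharded across NVLink-bridged NVL GPUs.
print(f"70B @ FP16:  {weights_gib(70e9):.0f} GiB")

# 175B parameters in FP16 -> ~326 GiB: multi-GPU territory, where the
# fully connected eight-GPU HGX H100 pays off.
print(f"175B @ FP16: {weights_gib(175e9):.0f} GiB")
```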

NVIDIA A100 PCIe (Ampere Architecture)

The NVIDIA A100 Tensor Core GPU is designed for compute-intensive AI, HPC, and data analytics applications. It delivers accelerated performance for AI-driven tasks and is particularly suitable for environments where multiple applications must run simultaneously, for example via Multi-Instance GPU (MIG) partitioning.

Intended use

  • Training
  • Inference
  • Data analysis

NVIDIA L40S (Ada Lovelace architecture)

The NVIDIA L40S GPU, based on the Ada Lovelace architecture, is a powerful general-purpose data center GPU, delivering multi-workload acceleration for large language model (LLM) inference and training, graphics, and video applications. As a platform for multimodal generative AI, the L40S provides end-to-end acceleration for inference, training, graphics, and video workflows, supporting the next generation of AI-enabled audio, speech, 2D, video, and 3D applications.

Intended use

  • Generative AI
  • Training
  • Deep learning
  • Inference
  • Rendering and 3D graphics

Technical data at a glance

¹ All Tensor Core figures are with sparsity; without sparsity, the values are halved.

Source: NVIDIA

HGX H100
  • STACKIT machine types: n3.104d.g8 (machine type with 8x HGX H100 GPUs)
  • FP64 TC | FP32 TFLOPS¹: 67 | 67
  • TF32 TC | FP16 TC TFLOPS¹: 989 | 1979
  • FP8 TC | INT8 TC TFLOPS/TOPS¹: 3958 | 3958
  • GPU memory: 80 GB HBM3
  • Media acceleration: 7 JPEG decoders, 7 video decoders

H100 NVL
  • STACKIT machine types: n3.14d.g1, n3.28d.g2, n3.56d.g4 (1 to 4 H100 NVL GPUs)
  • FP64 TC | FP32 TFLOPS¹: 60 | 60
  • TF32 TC | FP16 TC TFLOPS¹: 835 | 1671
  • FP8 TC | INT8 TC TFLOPS/TOPS¹: 3341 | 3341
  • GPU memory: 94 GB HBM3
  • Media acceleration: 7 JPEG decoders, 7 video decoders

A100
  • STACKIT machine types: n1.14d.g1, n1.28d.g2, n1.56d.g4 (1 to 4 A100 PCIe GPUs)
  • FP64 TC | FP32 TFLOPS¹: 19.5 | 19.5
  • TF32 TC | FP16 TC TFLOPS¹: 312 | 624
  • FP8 TC | INT8 TC TFLOPS/TOPS¹: NA | 1248
  • GPU memory: 80 GB HBM2e
  • Media acceleration: 1 JPEG decoder, 5 video decoders

L40S
  • STACKIT machine types: n2.14d.g1, n2.28d.g2, n2.56d.g4 (1 to 4 L40S GPUs)
  • FP64 TC | FP32 TFLOPS¹: NA | 91.6
  • TF32 TC | FP16 TC TFLOPS¹: 366 | 733
  • FP8 TC | INT8 TC TFLOPS/TOPS¹: 1466 | 1466
  • GPU memory: 48 GB GDDR6
  • Media acceleration: 3 video encoders, 3 video decoders, 4 JPEG decoders
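The sparsity footnote matters when comparing figures across vendors or spec sheets: the Tensor Core numbers above assume structured sparsity, and dense throughput is half. A small sketch deriving the dense FP8 figures from the values listed above (the NA entry is omitted):

```python
# FP8 Tensor Core TFLOPS with sparsity, transcribed from the table above.
fp8_sparse_tflops = {"HGX H100": 3958, "H100 NVL": 3341, "L40S": 1466}

# Per the footnote: without sparsity, throughput is half the listed value.
fp8_dense_tflops = {gpu: t / 2 for gpu, t in fp8_sparse_tflops.items()}
print(fp8_dense_tflops)
```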