H100 NVL

$29,999.99

VISION COMPUTERS, INC. PNY RTX H100 NVL - 94GB HBM3-350-400W - PNY Bulk Packaging and Accessories



About this item

  • The H100 NVL graphics card is designed to scale support for large language models, such as GPT3-175B, in mainstream PCIe-based server systems, delivering up to 12X the throughput of HGX A100 systems when configured with eight cards.

  • Equipped with advanced features, including 94GB of high-speed HBM3 memory, NVLink connectivity for enhanced inter-GPU communication, and 3,938 GB/s of memory bandwidth, the H100 NVL is built for high-performance AI inference tasks (see the device-query sketch after this list).

  • The card spans a broad performance range across compute types: 68 TFLOPS of FP64, 134 TFLOPS each for FP64 Tensor Core and FP32, and up to 7,916 TFLOPS/TOPS for FP8 and INT8 Tensor Core operations; these figures reflect the dual-GPU NVL pair with sparsity enabled.

  • It enables mainstream servers to deliver high-performance generative AI inference, simplifying deployment for partners and solution providers with fast time to market and easy scalability.

  • The H100 NVL's power efficiency is optimized with a configurable maximum power of 350-400W per GPU (2x 350-400W for the pair), supporting extensive computational tasks without excessive power draw.
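
As a rough illustration of how these GPUs appear to software (not part of the product listing), the minimal sketch below uses the standard CUDA runtime API to enumerate the installed devices, report each GPU's memory (roughly 94 GB for an H100 NVL), and test whether pairs of devices can access each other's memory directly, the path NVLink-bridged cards use for inter-GPU communication. Device indices and output formatting are illustrative only.

```cuda
// Minimal device-query sketch: per-GPU memory plus peer-to-peer access checks.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is reported in bytes; an H100 NVL should show ~94 GB.
        printf("GPU %d: %s, %.1f GB memory, %d SMs\n",
               i, prop.name, prop.totalGlobalMem / 1e9, prop.multiProcessorCount);
    }

    // Check whether each pair of GPUs can address the other's memory directly
    // (expected to report "yes" for NVLink-connected H100 NVL pairs).
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```

Compiled with nvcc and run on a server holding one or more of these cards, this prints the per-GPU memory and peer-access matrix described in the bullets above.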