
Blog | AI Inference-optimized GPU Server with up to 20 NVIDIA Tesla T4 GPUs

December 3rd 2019

 

 

What's New with Supermicro GPU Servers

 

Use Cases: 3D rendering, astrophysics and chemistry, cloud computing and virtualization, and other compute-intensive applications; dual-root system design for balanced performance and higher CPU-to-GPU communication.

 

Supports up to 20 single-width GPUs

 
Perfect Inference-Optimized GPU Server

Supermicro has created an Inference-optimized GPU system, built around NVIDIA GPUs, for your business. The Supermicro SuperServer 4029GP-TRT boasts superior GPU performance per watt and per dollar, delivering high-performance computing for astrophysics, cloud computing and virtualisation. It supports 2nd Gen Intel Xeon Scalable processors, up to 6TB of 3DS ECC DDR4-2933MHz memory, and eight PCI-E 3.0 x16 slots accommodating eight double-width GPUs. The SuperServer's compatibility with NVIDIA GPUs enables a new range of powerful and efficient server options that can be fully customised to meet your digital requirements and specifications.


 

Supermicro SuperServer 6049GP-TRT

  • Dual sockets supporting 2nd Gen Intel Xeon Scalable processors
  • Up to 6TB of 3DS ECC DDR4-2933MHz memory; supports Intel Optane
  • 24x hot-swap 3.5" drive bays,
  • or 22x hot-swap 3.5" drive bays + 2x U.2 NVMe 2.5" drives
  • 2x 10GBase-T LAN ports
  • Supports up to 20 single-width GPUs
Build Your Server

Supermicro SuperServer 4029GP-TRT

  • Dual sockets supporting 2nd Gen Intel Xeon Scalable processors
  • Up to 6TB of 3DS ECC DDR4-2933MHz memory; supports Intel Optane
  • Up to 24 hot-swap 2.5" drive bays; 8x 2.5" SATA drives
  • 2x 10GBase-T LAN ports via Intel C622
  • Supports up to 8 double-width GPUs
Build Your Server
 

Our latest case details

To stay ahead of the market, Supermicro GPU servers are compatible with NVIDIA products such as the Tesla T4 GPU, powered by Turing Tensor Cores, which drives today's most advanced AI inference platform. Using NVIDIA NVLink in the system, we have been able to fit eight GPUs in a single server, accelerating diverse high-performance cloud workloads including deep learning training and inference, machine learning, data analytics and graphics. These capabilities help businesses create new customer experiences, change how they respond to customer demands, and scale AI-based products and services cost-effectively.


How to operate NVIDIA’s TensorRT high-performance deep learning inference platform

This exceptional system allows NVLink to be used to run NVIDIA's TensorRT high-performance deep learning inference platform optimally. It delivers up to 40x higher throughput than CPU-only inference at real-time latency, while consuming only 60 percent of the power. Additionally, a single NVIDIA Tesla P4 server can replace up to 11 commodity CPU servers running deep learning inference applications and services.
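As a rough sanity check (an illustration, not a vendor benchmark), the two quoted figures can be combined: roughly 40x the throughput at 60 percent of the power implies about a 66x gain in inference throughput per watt over a CPU-only baseline. A minimal sketch:

```python
def perf_per_watt_gain(throughput_factor: float, power_fraction: float) -> float:
    """Relative inference throughput per watt versus a CPU-only baseline.

    throughput_factor: GPU throughput as a multiple of the CPU baseline (e.g. 40).
    power_fraction: GPU power draw as a fraction of the CPU baseline (e.g. 0.60).
    """
    return throughput_factor / power_fraction


# Using the figures quoted above: 40x throughput at 60% of the power.
gain = perf_per_watt_gain(40, 0.60)
print(f"{gain:.1f}x performance per watt")  # → 66.7x performance per watt
```

The function names and the baseline framing here are illustrative; the underlying 40x throughput and 60 percent power figures are the ones quoted above.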


 