Supermicro Solidifies Position as a Leader in Complete Rack Scale Liquid Cooling Solutions -- Currently Shipping Over 100,000 GPUs Per Quarter

Designed for Data Center Scale Reduction in TCO and Enabling Large AI Clusters to Perform with a Lower Energy Budget, Liquid Cooling Solution Handles the Highest Wattage Servers Containing the Latest GPUs and CPUs, Resulting in Lower Costs for AI Factories – Over 2,000 Liquid Cooled Racks Delivered Since June 2024

SAN JOSE, Calif., Oct. 7, 2024 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing a complete liquid cooling solution that includes powerful Coolant Distribution Units (CDUs), cold plates, Coolant Distribution Manifolds (CDMs), cooling towers, and end-to-end management software. This complete solution reduces ongoing power costs as well as Day 0 hardware acquisition and data center cooling infrastructure costs. The entire end-to-end data center scale liquid cooling solution is available directly from Supermicro.

"Supermicro continues to innovate, delivering full data center plug-and-play rack scale liquid cooling solutions," said Charles Liang, CEO and president of Supermicro. "Our complete liquid cooling solutions, including SuperCloud Composer for the entire life-cycle management of all components, are now cooling massive, state-of-the-art AI factories, reducing costs and improving performance. The combination of Supermicro deployment experience and delivering innovative technology is resulting in data center operators coming to Supermicro to meet their technical and financial goals for both the construction of greenfield sites and the modernization of existing data centers. Since Supermicro supplies all the components, the time to deployment and online are measured in weeks, not months."

Learn more about Supermicro liquid cooling solutions at www.supermicro.com/liquidcooling

Many organizations require the highest-performing GPUs and CPUs to remain competitive and need these servers to run constantly. Supermicro's ultra-dense liquid-cooled server, which packs dual top-bin CPUs and an 8-GPU NVIDIA HGX baseboard into just 4U, is the ultimate AI server for AI factories. When installed in a rack, this server quadruples the computing density, allowing organizations to run larger training models with a smaller data center footprint.
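For a rough sense of where a density claim like this comes from, the sketch below compares GPUs per rack when the limit is rack space versus rack power. The specific figures (a 48U rack, an 8U air-cooled chassis, a 36 kW air-cooled rack budget, and a 144 kW liquid-cooled rack budget) are illustrative assumptions for this sketch, not specifications from the announcement.

```python
# Illustrative GPU-density comparison for a single rack. Every number here is
# an assumption made for this sketch, not a Supermicro-published specification.

USABLE_RACK_UNITS = 48          # assumed usable space per rack
GPUS_PER_SERVER = 8             # 8-GPU HGX system
SERVER_POWER_KW = 12.0          # assumed draw of one 8-GPU server

# Assumed per-rack limits: air-cooled racks are typically capped at a much
# lower power budget than racks served by direct liquid cooling.
AIR_SERVER_HEIGHT_U, AIR_RACK_POWER_KW = 8, 36.0
DLC_SERVER_HEIGHT_U, DLC_RACK_POWER_KW = 4, 144.0

def gpus_per_rack(server_height_u: int, rack_power_kw: float) -> int:
    """GPUs per rack, limited by whichever runs out first: space or power."""
    servers_by_space = USABLE_RACK_UNITS // server_height_u
    servers_by_power = int(rack_power_kw // SERVER_POWER_KW)
    return min(servers_by_space, servers_by_power) * GPUS_PER_SERVER

print("Air-cooled rack:   ", gpus_per_rack(AIR_SERVER_HEIGHT_U, AIR_RACK_POWER_KW), "GPUs")
print("Liquid-cooled rack:", gpus_per_rack(DLC_SERVER_HEIGHT_U, DLC_RACK_POWER_KW), "GPUs")
```

Under those assumptions the liquid-cooled rack holds roughly four times as many GPUs, because the air-cooled rack runs out of power budget long before it runs out of space.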

Supermicro recently deployed more than 100,000 GPUs with its direct liquid cooling (DLC) solution for some of the largest AI factories ever built, as well as for other CSPs. With each server approaching 12 kW of power for AI and HPC workloads, liquid cooling is a more efficient way to maintain the desired operating temperature for each GPU and CPU. A single AI rack now generates over 100 kW of heat, which must be efficiently removed from the data center. Data center scale liquid cooling significantly reduces the power demand for a given cluster size. A power reduction of up to 40% allows more AI servers to be deployed within a fixed power envelope, increasing computing power and decreasing LLM training time, both of which are critical for these large CSPs and AI factories.
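To make that fixed-power-envelope point concrete, the back-of-envelope calculation below compares how many servers fit into the same facility power budget under two cooling-overhead assumptions. The figures used (a 12 kW server, a 10 MW facility budget, and PUE values of 1.5 for air cooling versus 1.1 for liquid cooling) are illustrative assumptions, not Supermicro-published measurements.

```python
# Back-of-envelope sketch: a lower cooling overhead lets more AI servers fit
# into the same facility power budget. All figures are illustrative
# assumptions, not Supermicro specifications.

SERVER_POWER_KW = 12.0           # assumed IT load of one dense GPU server
FACILITY_BUDGET_KW = 10_000.0    # hypothetical 10 MW facility power budget

PUE_AIR = 1.5      # assumed power usage effectiveness with air cooling
PUE_LIQUID = 1.1   # assumed PUE with direct liquid cooling

def servers_in_budget(pue: float) -> int:
    """Servers whose combined IT and cooling power fits within the budget."""
    return int(FACILITY_BUDGET_KW // (SERVER_POWER_KW * pue))

air = servers_in_budget(PUE_AIR)
liquid = servers_in_budget(PUE_LIQUID)
print(f"Air-cooled servers in budget:    {air}")
print(f"Liquid-cooled servers in budget: {liquid} ({(liquid - air) / air:.0%} more)")
```

Under those assumptions, the same facility budget supports roughly a third more GPU servers, which is the mechanism behind the cluster-level savings described above.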