DC thermal management, power kit is getting easier to find, but a lot more expensive

Expect to pay anywhere from 10-20 percent more, analysts tell El Reg

Months-long delays for critical datacenter infrastructure, including power and thermal management systems, have become the norm since the pandemic, but a fresh report from Dell'Oro Group suggests supply chains are finally returning to something closer to normal.

The analyst group's latest datacenter physical infrastructure report showed revenues were up 18 percent during the first quarter. Some of that growth came from larger shipment volumes, but prices are also on the rise.

Even though customers may not have to wait as long to get their kit, they're likely to pay considerably more than this time last year. Depending on the component, customers can expect to pay anywhere from 10 to 20 percent more, Dell'Oro analyst Lucas Beran told The Register, adding that thermal management and cabinet power distribution equipment have seen some of the largest price hikes.

Still, the situation has improved considerably since last year, when customers could find themselves waiting anywhere from 12 to 18 months just to get their hands on the UPSes, PDUs, and racks necessary to support additional capacity. Currently, Dell'Oro estimates lead times at six to 12 months. For reference, lead times need to drop to three to six months before they're back to pre-pandemic levels.

"Throughout 2023, datacenter physical infrastructure vendors are going to whittle down at their backlogs to get back to, more or less, historical norms," he said.

Longer term, Beran notes that the trend toward higher-TDP components and the hype surrounding generative AI are likely to have an impact on the market.

In particular, easier access to this kind of equipment could help datacenters cope with a new generation of watt-gobbling chips from Intel, AMD, and Nvidia. Today's CPUs can easily consume 400W under full load, roughly 120W more than the previous generation. In addition to the challenge of getting all that power to the rack, operators also need to account for more demanding cooling requirements.

The situation is even more challenging for customers with GPU clusters, which might pack four to eight 700W GPUs into a single chassis. For those training large language models, tens of thousands of GPUs may be required.
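To see how quickly those numbers add up, here's a rough back-of-the-envelope sketch. The 700W GPU figure and the four-to-eight-per-chassis range come from the report above; the host overhead and chassis-per-rack values are purely illustrative assumptions, not figures from Dell'Oro.

```python
# Back-of-the-envelope estimate of per-chassis and per-rack power draw
# for a GPU training system. Only the GPU wattage and per-chassis GPU
# count are taken from the article; the rest are assumed values.
GPU_POWER_W = 700          # per-GPU draw cited above
GPUS_PER_CHASSIS = 8       # high end of the four-to-eight range
HOST_OVERHEAD_W = 2_000    # assumed CPUs, memory, NICs, fans per chassis
CHASSIS_PER_RACK = 4       # assumed packing density

chassis_w = GPU_POWER_W * GPUS_PER_CHASSIS + HOST_OVERHEAD_W
rack_kw = chassis_w * CHASSIS_PER_RACK / 1_000

print(f"Per chassis: {chassis_w / 1_000:.1f} kW")  # ~7.6 kW
print(f"Per rack:    {rack_kw:.1f} kW")            # ~30 kW of heat to remove
```

Under those assumptions a single rack lands around 30kW, several times what a typical air-cooled rack was designed to handle, which is why the cooling technologies below keep coming up.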

Technologies to handle the thermal output of these systems do exist, including rear-door heat exchangers, direct-to-chip liquid cooling, and immersion cooling, but all of them require substantial facilities investment to deploy and operate.

Despite the AI hype, Beran doesn't expect these trends to directly impact revenues until 2024 or 2025 at the earliest. Even so, he remains optimistic about the future of the datacenter physical infrastructure market, and predicts revenues will grow by as much as 12 percent in 2023. ®
