Nvidia Launches New Generation of Server AI Chips

At its annual GTC 2022 conference, Nvidia announced several new chips and technologies designed to accelerate artificial-intelligence workloads.

The company introduced Hopper, its next-generation GPU architecture, along with the H100, the first chip based on it, aimed at machine-learning tasks.

The chip is fabricated on a 4 nm process and contains 80 billion transistors. It is the company's first GPU to support the PCIe Gen 5 interface and to use HBM3 memory, delivering 3 TB/s of memory bandwidth.

The company says the H100 is three times faster than its predecessor, the A100, at FP16, FP32, and FP64 precision, and six times faster at 8-bit floating point (FP8). According to Nvidia:
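As background, one reason reduced-precision formats such as FP16 and FP8 raise throughput is simply that each value occupies fewer bytes, so less data moves through memory per operation. The sketch below is purely illustrative, not an Nvidia benchmark, and uses NumPy's FP16/FP32 types (NumPy has no FP8 dtype, so FP8 itself is not shown):

```python
import numpy as np

# One million values stored at two precisions. Halving the bit width
# halves the bytes that must move between memory and compute units,
# which is part of why lower precision trains faster on GPUs.
x32 = np.ones(1_000_000, dtype=np.float32)
x16 = x32.astype(np.float16)

print(x32.nbytes)  # 4000000 bytes at 32-bit precision
print(x16.nbytes)  # 2000000 bytes at 16-bit — half the memory traffic
```

Hardware gains like the H100's claimed 6x at FP8 come from dedicated Tensor Core support on top of this bandwidth saving, not from narrower storage alone.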

“For training giant transformer models, the H100 delivered nine times the performance, completing training in just a few days instead of weeks.”