Nvidia recently introduced its next-generation DGX supercomputing platform, the DGX GH200, at Computex 2023. According to Nvidia, the artificial intelligence supercomputing platform will help customers build successes on the scale of ChatGPT.
Built by linking 256 Grace Hopper Superchips with 144TB of shared memory, the DGX GH200 can handle today’s most performance-intensive AI training workloads.
Each Grace Hopper Superchip combines a 72-core Arm-based Grace CPU, a Hopper GPU, 96GB of HBM3 memory and 512GB of LPDDR5X memory in a single package containing a total of 200 billion transistors.
By comparison, the DGX A100 system, the AI computing platform behind OpenAI’s ChatGPT, can connect only eight A100 GPUs to work together as a single unit with 320GB of shared memory.
To link its 256 Grace Hopper Superchips, the DGX GH200 uses the new NVLink Switch System, allowing them to act as one giant GPU with outstanding processing power and 144TB of shared memory, nearly 500 times more than the DGX A100 system used for ChatGPT.
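As a rough sanity check of that ratio, here is a back-of-the-envelope sketch using only the figures quoted in this article (144TB versus 320GB of shared memory):

```python
# Back-of-the-envelope check of the shared-memory ratio quoted above,
# using the figures from this article: 144TB (DGX GH200) vs. 320GB (DGX A100).
DGX_GH200_SHARED_MEMORY_GB = 144 * 1024   # 144TB expressed in GB
DGX_A100_SHARED_MEMORY_GB = 320           # eight A100 GPUs sharing 320GB

ratio = DGX_GH200_SHARED_MEMORY_GB / DGX_A100_SHARED_MEMORY_GB
print(f"Shared-memory ratio: {ratio:.0f}x")
# Prints roughly 460x, which lines up with the "nearly 500 times" figure Nvidia cites.
```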
Nvidia said that the complete DGX GH200 system uses up to 240 km of high-speed fiber-optic cabling and weighs more than 18 tons, yet it operates as a single GPU.
The supercomputing system delivers 1 exaflop of artificial intelligence performance, a figure nominally in the same range as the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee, which scores close to 1.2 exaflops, though Frontier’s result is measured at double precision rather than the lower precision typically used for AI workloads.