
Last week, the word petaflop was dropped multiple times during NVIDIA founder and CEO Jensen Huang’s GTC keynote. According to Gemini, a petaflop is “a unit of measurement for a computer's processing speed. It stands for quadrillion (one thousand trillion, written as 1,000,000,000,000,000) floating-point operations per second.” Huang used the term while introducing NVIDIA’s new Blackwell GPU (Graphics Processing Unit), which offers up to 20 petaflops of processing power. These chips can be deployed by the tens of thousands in the NVIDIA architecture.
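To put that scale in rough perspective, here is a back-of-the-envelope sketch. The 20-petaflops-per-GPU figure comes from the keynote; the 10,000-GPU cluster size is purely an illustrative assumption, since the keynote said only “tens of thousands.”

```python
# Back-of-the-envelope scale of a Blackwell deployment.
# 20 petaflops per GPU is the figure cited in the keynote;
# the 10,000-GPU cluster size is an illustrative assumption.
PETAFLOP = 10**15  # floating-point operations per second

per_gpu_flops = 20 * PETAFLOP
cluster_gpus = 10_000
cluster_flops = per_gpu_flops * cluster_gpus

print(f"One GPU: {per_gpu_flops:.0e} FLOPS")
print(f"Cluster: {cluster_flops:.0e} FLOPS")  # 2e+20, i.e. 200 exaflops
```

Under that assumption, a single cluster would sustain on the order of 200 quintillion floating-point operations every second.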

Comprehending these numbers is a feat of intellectual contortionism beyond the grasp of most, but consider this: these super-processors can dramatically accelerate the training of Large Language Models (LLMs) and enable exponentially better generative AI applications that will make those available today appear infantile by comparison. What does “exponentially better” mean, exactly? Speed, availability, and security will be table stakes. Improvements in these areas will likely be identified and monitored by AI itself.
