Domino's Taps Nvidia T4 GPUs, DGX To Spice Up Pizza Delivery With AI

Domino's is tapping Nvidia T4 GPUs and the chipmaker's DGX deep learning system to accelerate AI prediction and image-classification tasks, helping the pizza delivery chain provide a better experience for customers.

Domino's Pizza thinks it can improve the delivery experience with artificial intelligence, so the pizza delivery chain is betting big on Nvidia's data center GPUs to get the job done.

At the National Retail Federation's annual conference Monday, Nvidia announced that Domino's is tapping into a "bank" of servers running the chipmaker's T4 GPUs to accelerate AI inferencing for Domino's tasks that involve real-time predictions.

[Related: Ian Buck On 5 Big Bets Nvidia Is Making In 2020]

Domino's is also using an Nvidia DGX deep learning system with eight Tesla V100 GPUs for training purposes. This includes training a pizza image classification model on more than 5,000 images. These images were sent in by customers in exchange for loyalty points that could be redeemed for a pizza as part of the chain's Points for Pie program, which launched last year.
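To give a flavor of the idea behind such a classifier (this is a toy illustration, not Domino's actual model, which would be a deep neural network trained on the real customer photos): given labeled examples, a classifier learns to separate "pizza" from "not pizza." The sketch below stands in hand-made feature vectors for images and a nearest-centroid rule for the network; every name and number is hypothetical.

```python
# Toy sketch of the pizza image-classification idea. Real systems train a
# CNN on thousands of labeled photos; here, "images" are stand-in feature
# vectors and the "model" is a nearest-centroid classifier. All names,
# features, and values are illustrative.

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(pizza_examples, not_pizza_examples):
    """'Training' here is just averaging each class's examples."""
    return {"pizza": centroid(pizza_examples),
            "not_pizza": centroid(not_pizza_examples)}

def classify(model, features):
    """Label a new feature vector by its nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Tiny hand-made "dataset": [redness, roundness, crust_texture]
model = train(pizza_examples=[[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]],
              not_pizza_examples=[[0.1, 0.2, 0.1], [0.2, 0.1, 0.3]])
print(classify(model, [0.85, 0.8, 0.65]))  # → pizza
```

The production analogue swaps the centroids for learned network weights and the toy vectors for pixel data, but the train-then-classify shape is the same.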

"The data science team said this is a great AI application, so we built a model that classified pizza images. The response was overwhelmingly positive. We got a lot of press and massive redemptions, so people were using it," Zack Fragoso, a data science and AI manager at Domino's, said in a statement.

With Nvidia T4 servers, Domino's said it is improving the way it makes real-time predictions for a variety of tasks, including when orders will be ready. The company was able to boost the accuracy of its "load-time" model—which considers factors like the number of employees and managers working as well as the number and complexity of orders—from 75 percent to 95 percent, thanks in part to its use of Nvidia GPUs.
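As a rough illustration of what a "load-time" model of that shape might look like (the features mirror those named above, but the weights, baseline, and formula are entirely hypothetical, not Domino's model), it can be sketched as a regression over staffing and order features:

```python
# Hypothetical sketch of a "load time" predictor: estimate minutes until an
# order is ready from staffing and order-volume features. The coefficients
# and baseline below are invented for illustration only.

def predict_load_time(n_employees, n_managers, n_open_orders, avg_order_complexity):
    """Toy linear model: more staff lowers the estimate; more and
    harder open orders raise it."""
    base_minutes = 12.0
    estimate = (base_minutes
                - 0.8 * n_employees
                - 0.5 * n_managers
                + 1.5 * n_open_orders
                + 2.0 * avg_order_complexity)
    return max(estimate, 2.0)  # never predict under 2 minutes

# A store with 4 employees, 1 manager, 3 open orders of moderate complexity:
print(round(predict_load_time(4, 1, 3, 1.2), 1))  # → 15.2
```

In practice such a model would be fit on historical order data rather than hand-set, and serving it on T4 inference GPUs is what Domino's credits for the latency gains described below.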

Domino's is also exploring the use of AI inference with Nvidia's T4 GPUs for computer vision applications that improve the experience for customers who choose carryout.

So far, the delivery chain has completed prediction tasks 10 times faster on average with Nvidia GPUs than on its previous IT infrastructure.

"Model latency is extremely important, so we are building out an inference stack using T4s to host our AI models in production. We’ve already seen pretty extreme improvements with latency down from 50 milliseconds to sub-10ms," Fragoso said.

During Nvidia's most recent earnings call last November, CEO Jensen Huang said that T4 GPUs were "really kicking into gear" as the inference chips surpassed Tesla V100 training GPUs in sales for the first time.

"This year we started to see the growth of inference to the point where we sold more T4 units for inference than we sold V100s, which we sell for training, and both were record highs," Huang said at the time.