
Nvidia's Ian Buck: A100 GPU Will 'Future-Proof' Data Centers For AI

'By having one infrastructure that can be both used for training at scale as well as inference for scale out at the same time, it not only protects the investment, but it makes it future-proof as things move around,' says Buck, Nvidia's head of accelerated computing, in an interview with CRN.


CEO Jensen Huang has said that there will be demand for AI throughout the entire data center market. When is that going to happen?

To understand that perspective, you should just think about what AI is capable of doing. At the highest level, AI can write software from data. It can look at a set of data and come up with a model that predicts an outcome, which is actually writing software. If you apply that template to the things data centers do to understand and manage data, it can be applied in almost any use case where you need to understand data, even cases where you want to optimize which data you're going to look at.

We see that proliferation happening, certainly as users interact with the cloud; that is user data that needs to be understood. We could use it to transcribe this call. When we upload video or pictures, AI can understand that content: who to share it with, who not to share it with, whether it should be shared at all, and whether it's secure and actually you or not you. These modalities, these use cases are not unique to YouTube or Twitter. They're what most data centers actually do with their own use cases in their own domains. The challenge is we just have to get this technology into the hands of people who can apply it to their particular domain, and that's been a big mission for Nvidia.

AI was first deployed at scale by the hyperscalers because they have the talent and capacity to figure it out for the first time. A lot of these use cases are now well enough understood that they can be deployed and add value to the rest of the market, whether it be the financial world, the retail world, the telecommunications world and so on. That's one of the reasons why you see us inventing SDKs like Jarvis for conversational AI. Jarvis is a complete conversational AI SDK. It comes with a whole bunch of pre-trained models: for speech recognition; for language understanding, so after you've recognized what was said, you can understand what it meant; for chatbot technology, so you can actually come up with an answer; and for text-to-speech, so a computer can talk back to you in a voice that's indistinguishable from a human's but generated by a computer.
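To make that pipeline concrete, here is a minimal sketch of the speech-recognition-to-language-understanding flow Buck describes, using publicly available Hugging Face pipelines as a stand-in rather than the Jarvis API itself; the model names and the audio file path are illustrative assumptions, not Nvidia components.

```python
# Illustrative stand-in for the conversational AI flow described above
# (speech recognition -> language understanding -> answer). These are public
# Hugging Face checkpoints, not Jarvis models; the audio file is a placeholder.
from transformers import pipeline

# 1. Speech recognition: turn an audio clip into text.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")
transcript = asr("earnings_call.wav")["text"]  # placeholder audio path

# 2. Language understanding: pull an answer out of the transcript.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
answer = qa(question="What was the quarterly revenue?", context=transcript)
print(answer["answer"])

# 3. A text-to-speech stage would then turn the answer back into audio,
#    which is the final step of the pipeline Buck describes.
```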

We actually make all those models available on NGC. But that's not enough, because we're not going to know anything about, say, financial trading information, so we provide you with retraining kits: how to retrain a BERT [model] so you can understand financial documents like a 10-K or an earnings report. You can apply it to whatever output makes sense for your use case. And then, of course, we also make a software stack that's pre-configured and optimized to deploy on GPUs, so you can very quickly, with one line of code, take your trained model and deploy it across a Kubernetes cluster. So that's us democratizing AI. We create these SDKs, like Jarvis for conversational AI and Merlin for recommender systems, we open-source them, we make them free on NGC, and we give developers all the tools they need to apply them to their domain-specific use cases.
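As an illustration of what retraining a BERT model on domain documents looks like in practice, here is a minimal fine-tuning sketch using the Hugging Face Trainer API rather than Nvidia's NGC retraining kits; the CSV file, label scheme and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch: fine-tuning a BERT checkpoint on domain-specific text,
# e.g. sentences from earnings reports labeled 0/1/2 for sentiment.
# Assumes a hypothetical filings_train.csv with "text" and integer "label" columns.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # 3 example classes

dataset = load_dataset("csv", data_files={"train": "filings_train.csv"})

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch examples directly.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-finance",
                         per_device_train_batch_size=16,
                         num_train_epochs=3)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
trainer.save_model("bert-finance")  # checkpoint ready for GPU inference serving
```

The saved model would then be handed to a GPU inference server for the Kubernetes-based deployment step Buck mentions; the exact serving setup depends on the NGC software stack in use, so it is not shown here.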
