Nvidia Superstar Rev Lebaredian On Why Schneider Electric Is ‘Essential’ And The ‘Next Step’ In The AI Journey

‘Within our ecosystem and the new ecosystem we are creating around AI Factories, Schneider Electric is essential,’ said Lebaredian. ‘There is no way we can do this without your participation and your partnership. For where we need to go, the current technologies that we have aren’t sufficient.’

Nvidia Vice President of Omniverse and Simulation Technology Rev Lebaredian said Schneider Electric is “essential” as the GPU powerhouse continues to push the AI technology envelope.

“Within our ecosystem and the new ecosystem we are creating around AI Factories, Schneider Electric is essential,” said Lebaredian (pictured left), a 24-year Nvidia veteran renowned for his groundbreaking work in rendering and AI and robotic simulation applications. “There is no way we can do this without your participation and your partnership. For where we need to go, the current technologies that we have aren’t sufficient. The only way we are going to be able to develop everything we need is to work together, us working together, but not just us. We need a large ecosystem.”

Lebaredian made the comments in a fireside chat at Schneider Electric’s Innovation Summit North America with Schneider Electric Executive Vice President of Secure Power and Data Centers Division and Global Services Business Pankaj Sharma (pictured right).

Schneider Electric’s energy, power and cooling prowess has been key in ensuring customers can run the massive Nvidia GPU AI-powered systems.

“The current generation we have, the Blackwell generation of GPUs, we are at about 130 to 140 kilowatts per rack, and we are looking at going to a megawatt per rack,” said Lebaredian. “Powering it is one part of the problem but removing all of the wasted heat is the other part. This thing will catch on fire or melt down otherwise. So we have to switch our cooling infrastructure from air cooling to liquid cooling and do all these other crazy things. But I think that’s super exciting. We now have this great engineering challenge with power engineering as well as plumbing.”

Lebaredian said the Schneider Electric partnership is key as Nvidia takes its AI prowess into the physical world. “That is why we have so much work together with Schneider Electric,” he said. “Nvidia is a computing company. We deal with the digital world, but we need to connect into the physical world in order to really bring the value of this technology. This is a key partnership for that.”

As part of the discussion, Sharma announced that Schneider Electric would be joining the Alliance for OpenUSD to foster collaboration around Universal Scene Description technology, which has been critical in AI simulations.

Sharma said it is important to be part of OpenUSD because of the role Schneider Electric plays in the physical world. “It is important for the ecosystem because in some way we play a role in enabling all that is in the physical world,” he said. “But the beauty and the comfort is that the physical world is literally getting trained in the virtual world before it is coming into the physical world. … For us, being part of this alliance is absolutely critical.”

Lebaredian, for his part, said simulation is a core part of how Nvidia designs and builds its chips. “We don’t know any other way to create our chips, which have billions and billions of transistors,” he said. “The number of things that could go wrong is so great that it is impossible for us to design such a chip and ensure that it is actually going to work when we get it back from our fab without doing insane amounts of simulation.”

Here is more of what Lebaredian had to say in his conversation with Sharma.

A 30-Year ‘Overnight’ Success Story

Most of you now have heard about Nvidia, and you probably even know how to pronounce it correctly. This is a recent development. For most of the time I have been at Nvidia, most people didn’t know about it unless they were into video games or hard-core computer science and computing.

But since the ChatGPT moment everybody now knows about us. It seems like we just popped up overnight, that we maybe got lucky. But it was a 30-year overnight success. It took us a long time to get to the point where the world could discover what we were doing.

From the very inception of the company the idea behind it—the mission—was to create a place where the very best of the world’s computer scientists come to do their life’s work. That’s the core mission. It is not for a specific product or a specific technology, but just a place where computer scientists come to do the very best work.

The goal was to build special computers that solve near-impossible problems, and the first problem we chose to tackle was computer graphics, what’s called rendering. It is like a physics simulation where we simulate the physics of how light interacts with matter in order to produce images. We decided to do that for video games because that was a potentially large enough market to sustain the R&D we would have to contribute.

The idea was that this was the type of physics simulation that was such a large computing problem no normal computer could do it. And if we could solve this well enough, we could then extend that same computing to other types of simulations, to other large computing problems. That’s what we did. We made our GPUs more programmable over time, and we expanded from doing rendering to doing other kinds of physics simulations: computational fluid dynamics, molecular dynamics, seismic analysis, etc.

We waited for many years for a killer app to show up on this computing platform we were building, which is essentially supercomputers that could fit under your desk or fit inside your laptop. It took longer than we thought.

We introduced a platform called CUDA, our general-purpose programming interface for our GPUs, in 2006. It took about seven years before the moment happened where somebody finally figured out what to do with all this computing power.

A Fundamentally Different Approach To Computing

What we understood early on was that this was much more than solving a particular class of algorithms, a particular class of problems. It was a whole new form of computing, a whole new form of software development. Up until that moment in time, the best algorithms, the best software, were all developed one way: You get a really smart human who knows how to program a computer, they imagine an algorithm, transcribe it into code, compile it, and you are done. You can do it on a relatively small computer.

But what we did here was fundamentally different. Instead of writing the function directly, devising the algorithm, we gave a supercomputer a large number of examples of what we would like the software to do. We gave it images along with their labels, like flashcards, saying, ‘This is a cat, this is a dog, this is a tractor, this is an airplane.’ And if you do that enough times, it writes the software for you. The key ingredients were enough examples, the data to feed into the machine, and a very large computer.

So we came to the conclusion that this was going to change everything. There is a large set of computing problems that nobody would even have dared try to go solve in our lifetimes. And now those have been unlocked. All we need is really large computers, lots and lots of them, and we need a large amount of data. So that is where we are today.

The Power Requirements For Nvidia AI Factories

Every time you make a query in ChatGPT or Claude, there is an AI factory somewhere that is taking your prompt and all the data you put in as input, and tokens are coming out. You are maximizing the number of tokens you get per unit of energy and per dollar, maximizing the uptime of these factories, and minimizing the latency from when you make the prompt until the response comes out. These are all the kinds of optimizations and speeds and feeds you would do with a factory.

You can see that they are actually quite different when you look at their designs, particularly from a power perspective. With traditional data centers you have 10, maybe 15 or 20 kilowatts per rack, more than sufficient for storing and moving data around. But with an [AI] factory you want to maximize the density as much as possible. The less distance those ones and zeros, those electrons, have to travel, the more efficiently this machine runs. So we try to pack as much computing as possible into a dense area. But that creates all of these other challenges in terms of powering these data centers.

The current generation we have, the Blackwell generation of GPUs, we are at about 130 to 140 kilowatts per rack, and we are looking at going to a megawatt per rack. Powering it is one part of the problem but removing all of the wasted heat is the other part. This thing will catch on fire or melt down otherwise. So we have to switch our cooling infrastructure from air cooling to liquid cooling and do all these other crazy things. But I think that’s super exciting. We now have this great engineering challenge with power engineering as well as plumbing. We have to start really figuring out how to maximize this new domain to produce something that is of such great value. Everybody here can participate in this new industry that is forming.

The Nvidia Culture: Be Courageous Enough To Fail

Our mission to this day is that we want the world’s best computer scientists to come here to do their life’s work, and we really, really mean it. In order to do that, sometimes you have to make trade-offs and sacrifices that aren’t obvious up front. Often we find ourselves looking at an existing computing market we could enter and compete in, doing something faster and cheaper than the players already there. It would make financial sense to do that. But it would violate our mission, because if we want the world’s best computer scientists to do their life’s work here, they are not going to stay if all they are doing is rehashing what others have done and doing it slightly better or cheaper. We have to continually find new markets, create new markets, in order to create the conditions by which our engineers and researchers can innovate.

I just saw this fireside chat [Nvidia CEO] Jensen [Huang] was doing when he accepted the [Professor] Stephen Hawking [Fellowship Award]. I recommend all of you go watch that. Somebody asked him a question, and he said when he started the company, he read all the [management] books with common ideas on how you should manage people. There was this idea of ranking all your employees and you should fire your bottom 5 percent yearly or regularly, and he said over time he figured out all of that just doesn’t work. We don’t do any of those things.

One of the things you need when you are innovating is you need your people to be courageous enough to go try things that are likely to fail. If they have a consistent fear they are going to get fired because they are not performing, because the thing they took a risk on is too risky and it is going to look like failure, [they won’t take those risks required to innovate]. If you don’t create a safe space for them to fail, then they are not going to do that. So you have to be committed to that as well. Jensen famously announced that we don’t fire people—we torture everyone so they leave, but we don’t actually fire them.

The Next Step In The AI Journey And Schneider Electric

Largely, anybody that is using AI here today is using it to make sense of the information you have on the internet, make sense of your documents, make sense of things that are already digital, which is just information. But the next step is going to be taking that same capability and applying it to the physical world.

That is why we have so much work together with Schneider Electric. Nvidia is a computing company. We deal with the digital world, but we need to connect into the physical world in order to really bring the value of this technology. This is a key partnership for that.

Pushing The Envelope With The Nvidia-Schneider Electric Partnership

I think our collaboration is already starting to bear fruit. Within our ecosystem and the new ecosystem we are creating around AI Factories, Schneider Electric is essential. There is no way we can do this without your participation and your partnership.

For where we need to go, the current technologies that we have aren’t sufficient. The only way we are going to be able to develop everything we need is to work together, us working together, but not just us. We need a large ecosystem.

Fundamentally we at Nvidia reject this idea that the technology markets are zero sum and that it is winner takes all. The very nature of technology and innovation is that you are creating something new. All of us can contribute something new and add to the world. We don’t have to keep taking things from each other. There is so much work that needs to get done. So many things we haven’t done yet. So if we can just align on who is doing what together we can create something much greater than otherwise would have been possible.

At Nvidia it’s kind of funny. We actually don’t really have customers directly. We don’t sell anything directly. There is nobody who can come and buy something from Nvidia, because everything we do goes to market through our partners.

We built all this core technology. We are likely the purest technology company in existence because our product really is just raw technology. By itself it is not useful. We have to take our technology, bring it to our partners, embed it inside their products and their technology, and then they take it out to market to end customers.

Some companies really work toward a philosophy of being very vertical, doing everything end to end and owning everything in the supply chain so they can own a space completely. The way we look at it, we would rather have our tiny part of every industry in every place, and others can come along with us, build on top of us, take our technology, embed it into theirs and then go serve more and more markets. Schneider Electric is a key part of this for us, and not just in terms of going to market; we are also a customer of your products, and we co-develop things together. There are multiple touchpoints in which we can partner and work together.

The very nature of technology innovation is creating something new. If we were not innovating or creating anymore, then it would just be a land grab: everyone trying to take their piece of the same stuff that has already been created. But the very nature of what we are doing is always about creating new things. There is always more space.

The Agentic AI Revolution

We are now in the agentic AI phase, where we’re taking these generative models and giving them access, a way to sense their environment. The environment for these models is essentially the internet or your computing systems, which they can query. We are giving them the ability to take action. So an agent, like a coding agent, could look at your software code base; you can ask it to do something, and it could go modify the code and execute on it. Or an agent could help you book your flights: it could query all the flights, because it has access to that database, and then buy the tickets or book the tickets on your behalf. That is the era we are in. It is a really exciting one, and we are not finished with it.

The ‘Next’ Agent Era

We are now starting to kick off the next era of where we are taking these agents. We are giving them physical embodiments. We are taking them out of the digital world, the purely digital world, and giving them the ability to sense the physical world through sensors like cameras or LiDARs [Light Detection and Ranging], any sensor. They can then make decisions inside the real world and act in the real world. One such physical AI agent is an autonomous vehicle driver.

There are Waymos and others out there. That is essentially a physical AI agent that is doing a driving task. We are starting to create other types of embodiments: humanoids. We already have robotic arms. Those are showing up in factories and warehouses and supply chains and hospitals.

On Schneider Electric Joining The Alliance For OpenUSD To Drive Open Standards For 3-D Content

A few years ago we finally gathered enough companies to band together to create the Alliance for OpenUSD. The founding members were Pixar, of course, Adobe, Autodesk, Apple and us. Since then, dozens have joined. I heard recently that Schneider Electric might be joining as well. [In response, Sharma confirmed that Schneider Electric is joining the Alliance for OpenUSD.]

From the very beginning, simulation has been a core part of what Nvidia does in order to design and build the complex machines that we build. We don’t know any other way to create our chips, which have billions and billions of transistors. The number of things that could go wrong is so great that it is impossible for us to design such a chip and ensure that it is actually going to work when we get it back from our fab without doing insane amounts of simulation.

For as long as I’ve been at Nvidia, we have always had a huge supercomputer running 24x7 that puts all of our chip engineers on guardrails. Even if an engineer wanted to sabotage our chip, they couldn’t, because we are continuously testing every single design change to our chip in the supercomputer. So by the time we tape it out and send it to TSMC for them to fab, we are certain that when it comes back it is not only going to work, it is going to perform exactly how we simulated. That has been the case for every chip we have taped out since I have been there: it comes back and it works the first time.

Now with the complexity of the things that we are building, it is not just the chip. We have to design the systems, the rack, the computer. It is the whole AI Factory. All of this has to be co-designed together. We have no chance of being able to create these things if we don’t simulate it all holistically and co-design it all together.

The complexity of all of the things we are building in the world is now approaching the complexity of our chips. You can’t build a factory or a warehouse, with the software and automation and the robots and everything else in there, and make sure it is going to work right when you build it without simulating it in the design process first. That is the only way we are going to be able to achieve it. Otherwise we are going to build a whole bunch of things that don’t work, and we are going to waste a lot of material, energy and time rebuilding them in the real world, trying to modify and change them in the real world. And likely these things will just fail and we’ll hit a ceiling.

On The Nvidia And Schneider Electric Collaboration

Our collaboration has been great. In the keynote I heard that basically this [electrical] industry hasn’t changed much for 70 years. The last time we had a major rehab of building power, building the grid and all of the power infrastructure was 70 years ago, and we have reached the limits of that.

I would imagine that this industry no longer has the same muscles, the muscle memory to move quickly and iterate quickly on new ideas and new techniques. I think this is something that we have to consciously change. We are at a critical point here. This industry has to align with how we work in the computing and tech industry and try to work at our pace by moving quickly and failing quickly.