Nvidia Pushes Parallel Computing, Opens Up CUDA Programming Model

Most notably, the chipmaker released compiler source code that enables software developers to add new language and architecture support to Nvidia’s CUDA parallel programming model. The new LLVM-based CUDA compiler code is included in the latest release of Nvidia’s CUDA Toolkit, version 4.1.

According to Nvidia, the new compiler source code "opens up" its CUDA parallel programming platform, making it easier for developers to add GPU support to additional programming languages. In other words, it simplifies the programming of parallel computing systems and, Nvidia said, will accelerate the path to exascale computing.
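For context, the CUDA model the new compiler targets has developers write "kernels" that run across thousands of GPU threads. The sketch below is a minimal, illustrative CUDA C example; the function name and launch configuration are hypothetical and not taken from Nvidia's announcement:

```c
// Minimal CUDA C sketch: add two vectors element-wise on the GPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each GPU thread computes one element of the result.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// Host-side launch: enough 256-thread blocks to cover n elements.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```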

"We moved to a new open-source infrastructure called LLVM (lower-level virtual machine), which is a public compile infrastructure, and that has allowed us to publish the source code," explained Sumit Gupta, director of high performance computing products at Nvidia. "And the idea behind it is that a lot of the academics want to build support for new languages to target the GPU, or they want to add support using CUDA for different architectures, for example, AMD GPUs. So having the CUDA platform available in this manner enables people to do that."

As part of CUDA Toolkit 4.1, Nvidia has also released a GPU code optimization tool that visually guides developers through the CUDA programming process. The tool also flags potential code bottlenecks, saving developers rework in the long run.

In a separate effort to facilitate parallel computing, Nvidia recently launched its "2x in 4 Weeks. Guaranteed." program, encouraging programmers to use a directives-based model to double their application speed in one month. Directives let programmers give the compiler "hints" about which regions of code to accelerate, without modifying the underlying code itself.
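As a rough illustration of the idea, a directive can be a single pragma placed above an existing loop; the compiler then handles the GPU offload. The sketch below uses OpenACC-style syntax (the article does not name a specific directive standard), and the function is illustrative only:

```c
// Directives sketch (OpenACC-style syntax assumed): the pragma is a "hint"
// telling the compiler to accelerate this loop; the loop body is unchanged.
void vecAdd(const float *restrict a, const float *restrict b,
            float *restrict c, int n)
{
    #pragma acc kernels loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```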

This approach, Nvidia said, has already allowed some developers to accelerate their applications by up to five times -- in as little as one day of programming. Like the compiler source code release, directives are meant to reduce developer headaches when it comes to parallel programming.

"I spend a lot of time with my partners, with my value added resellers, and with my OEMs. The number one thing they always tell us is, 'we have so many users out there that have heard about the value of GPUs and are looking for an easy way to take advantage of them,'" Gupta told CRN. "One of the biggest pain points in the community is that these users don’t know how to take advantage of any architecture, whether its multi-core CPUs or GPUs. So this new GPU compiler that uses directives is really appealing to a broader audience."

Nvidia’s new compiler source code and "2x in 4 Weeks. Guaranteed." program are both part of its push toward exascale computing and the wider use of GPUs in scientific research, the chipmaker said.