Intel's Larrabee Graphics To Take On Nvidia, ATI ... In 2010

Intel wants to get back into the discrete graphics game and on Monday revealed some details about how it plans to get there within the next couple of years with a family of products and development tools codenamed Larrabee.

The Santa Clara, Calif.-based chip giant will officially present a paper on "a many-core visual computing architecture" at the SIGGRAPH 2008 conference in Los Angeles on Aug. 12 but has already placed the paper online.

Intel on Monday said the first Larrabee-based product will be released "in 2009 or 2010." That unspecified device "will target the personal computer graphics market," but Intel promised applications for Larrabee that go beyond traditional graphics processing. The company also pledged to build the infrastructure and ecosystem necessary for an ambitious plan to blend linear and parallel computing on entirely x86-based hardware platforms.

"It's shaping up to be an interesting product. We won't know where it scales in terms of performance until we have hardware in hand, but they've taken a number of steps to make it fit into the market," said Dean McCarron, principal analyst at Mercury Research.

According to the analyst, Intel's Larrabee presentation is lighter on the more "esoteric" details previously emphasized by the chip maker and more focused on appealing to potential developers by promising support for existing graphics APIs like DirectX and OpenGL.

The term "ray tracing," for example, is noticeably downplayed in the paper and accompanying press statements from Intel. Ray tracing is a technique that "traces" the path of light through objects to produce a higher degree of photorealism than is possible with traditional raster-based graphics. The downside -- mainstream graphics processors lack the horsepower to perform such work efficiently in real time.

Intel had in the recent past highlighted the ray tracing capabilities of Larrabee, promising to produce a product that can perform such graphically intense calculations at an acceptable computational cost. At the company's recent Research@Intel Day, Intel CTO Justin Rattner made ray tracing the central theme of his comments about Larrabee, but the SIGGRAPH paper doesn't mention the subject until more than halfway through.

For now, the focus is on Larrabee as a potential threat, albeit a distant one, to the discrete graphics products made by Nvidia and Advanced Micro Devices. Nvidia, headquartered in Santa Clara, Calif., is the market leader in discrete graphics cards for PCs and dominates the high end with its GeForce line of products. Sunnyvale, Calif.-based AMD's ATI division is second overall and enjoys the strongest sales of its Radeon products in the more price-conscious segment of the market.

Intel remains the market leader in shipments of graphics hardware, due to the huge numbers of motherboards it ships featuring integrated graphics chipsets. The overwhelming majority of computers sold worldwide come with graphics hardware that taps into the same available memory as the central processor rather than one or more discrete graphics cards with separate, dedicated graphics memory.

But the increasing demands placed on graphics hardware by modern PC user interfaces, rich media applications and mainstream gaming have helped Nvidia become a genuine semiconductor powerhouse, while Intel's main CPU rival AMD bought its own large footprint in the discrete graphics market with its 2006 acquisition of ATI.

Intel's strategy, some years in the making, has been to develop its own discrete graphics unit in-house rather than acquiring one, according to McCarron.

"It's very clear that some time ago, say two or three years ago -- and keep in mind that back then we were hearing rumors that Intel was looking at ATI or Nvidia and#91;as possible acquisitionsand#93; -- but back then the decision was clearly made to develop this expertise in house," the analyst said.

Building The Larrabee Ecosystem

While actual products based on Larrabee are a year or more away from being released, McCarron said Intel is wise to lay the groundwork for the ecosystem it plans to build around the new technology sooner rather than later.

"If they just dumped this out on the market when the processor was ready, it wouldn't go anywhere. They need the ecosystem and infrastructure in place," he said. The Larrabee paper highlights various programming models and software tools Intel says it is developing in partnership "with more than 400 universities, DARPA and companies such as Microsoft and HP."

The fact that Intel plans to build a discrete graphics architecture that is entirely x86-based is perhaps the key takeaway from Monday's news, though it's been known for some time. Nvidia and AMD, the two leading makers of discrete graphics for personal computers, use proprietary graphics-focused instruction sets for their products.

While it's no secret that Intel was planning an x86-based discrete graphics product, recent moves by the chip maker to expand the reach of its Intel Architecture (IA) into new product categories place Larrabee in the context of a larger story.

McCarron said the Larrabee announcement had "an echo of the old RISC versus CISC battle," referring to Intel's decades-long war with makers of non-IA microprocessors for dominance in the personal computer market. Intel won that battle -- and recently announced plans to renew its war on RISC-based chips in the embedded and consumer electronics markets.

The analyst also noted that Monday's Larrabee announcement reflects a good deal more than just Intel's intent to move into the discrete graphics market.

"That's not entirely all that this is about. This will be Intel's first re-entry into discrete graphics, but it's not really just discrete graphics. Like Nvidia has been doing, it's about the development of the parallel processor," McCarron said.

In recent years, both Nvidia and AMD have accelerated their development of products and programming tools for general-purpose computing on graphics processors, or GPGPU computing. These include hardware, such as Nvidia's Tesla and AMD's FireStream GPUs, and CUDA, a C-based programming environment Nvidia designed to let software developers treat the graphics processor as a low-cost "stream processor." Such a processor can be programmed to perform certain traditionally CPU-handled computational tasks simultaneously, or "in parallel," far more efficiently than a central processor can.
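
As a rough illustration of that programming model (a hypothetical sketch, not code drawn from Nvidia's toolkit or Intel's paper), a CUDA kernel replaces a serial CPU loop with thousands of GPU threads, each handling one element of the data:

    // Illustrative CUDA sketch of the "stream processor" model described above:
    // a SAXPY (y = a*x + y) kernel where each GPU thread processes one element
    // that a serial CPU loop would otherwise handle one at a time.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];                     // one element per thread
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements in parallel.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f\n", hy[0]);  // expect 4.0
        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

The SAXPY operation and the 256-thread block size here are arbitrary choices for the example; the point is simply that the per-element work is expressed once and fanned out across the GPU rather than iterated on the CPU.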

"You look at what's happening today on the high-end workstations. You have some dilution, where parallel processing is happening on the GPU and serial processing is happening on CPU," McCarron said.