Hot Times Call For Cooler CPUs


The term "hot chips" used to mean processors that pushed the performance envelope. Today, when even the pokiest PCs clock in at more than 1.0 GHz, it mostly means notebooks that threaten to singe one's thighs and desktops outfitted with screaming cooling fans. Recently, this burning issue of discomforting proportions has moved both Intel and AMD to take decisive action. The fallout will soon result in the biggest changes to microprocessor product lineups VARs will have at their disposal since the first Pentium was introduced in 1993.

I'm talking about the move to multicore chips, which will place two (or, in several years' time, four) complete CPUs on a single semiconductor die.

Intel dropped the first shoe in May, when it told attendees at its Spring Analysts meeting that all of its future microprocessor development (except for the upcoming Prescott) will be multicore. Processors for both desktops and servers will ship in volume in 2005. AMD weighed in a month later, tipping plans to release multicore Opterons and Athlons next year. (Let's note that IBM actually beat everyone to the multicore punch. It rolled out a dual-core Power 4 processor, for use in its RISC servers, in 2001.)

That shift isn't coming about by choice but by necessity. It costs roughly $32 in bill-of-materials parts for a systems builder to cool an 80-watt (W) processor. (A power supply, voltage regulator and capacitor come in at $20 to $25. The heat sink and fan tack on an additional $10.)

When Intel found that its in-the-works Tejas processor would have drawn upward of 100 W, which would have forced the use of more expensive cooling solutions, it decided to pull the plug and move to multicore.

The benefits are obvious. A processor with dual 2.0-GHz cores can deliver performance not all that different from a single-core 3.5-GHz part. More important, the dual-core device holds down power dissipation to a figure closer to that of a standalone 2.0-GHz CPU than a 3.5-GHz chip. So processing throughput effectively doubles for not a whole lot more power.
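To put rough numbers on that tradeoff, here's a back-of-envelope sketch in Python (an illustration of the scaling argument, not either vendor's actual math): dynamic CPU power scales roughly with capacitance times voltage squared times frequency, and supply voltage tends to track clock speed, so power grows roughly with the cube of frequency.

```python
# Rough model: dynamic power ~ C * V^2 * f, and V tends to scale with f,
# so power grows roughly as f^3. Figures are illustrative, not vendor specs.

def relative_power(freq_ghz, cores=1, ref_ghz=2.0):
    """Power relative to a single core at ref_ghz, assuming P ~ f^3."""
    return cores * (freq_ghz / ref_ghz) ** 3

single_35 = relative_power(3.5)          # one 3.5-GHz core
dual_20 = relative_power(2.0, cores=2)   # two 2.0-GHz cores

print(f"One 3.5-GHz core:  {single_35:.1f}x the power of one 2.0-GHz core")
print(f"Two 2.0-GHz cores: {dual_20:.1f}x")
```

Under this crude model, the dual-core part delivers comparable aggregate throughput at a fraction of the power budget of the 3.5-GHz single-core chip.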

Moving forward, power-dissipation specs will be just as important as clock speed in picking a processor. A key executive at AMD agrees.

"As we develop our plans around multicore, power is a major consideration in what we're doing," says Kevin Knox, AMD's director of worldwide enterprise business development.

Knox sees power as a front-line issue for customers, pointing to his recent trip to New York's Wall Street. "It was amazing how many people were talking to me about power," he says.

Concerns are three-pronged. "First, they want to lower their power costs," Knox says. "Second, data centers are becoming more crowded, and expansion in New York is pretty expensive. The final issue is these guys have racks of blade servers, and they can't populate the racks because they can't get enough power in there."

Putting a lid on server wattage could encourage Knox's customers to stuff more systems in those racks, which he says are often only 25 percent full.

An additional technological angle that will help AMD and Intel rein in power is a transition now under way in the semiconductor industry, where fabrication technology is moving from 130 nm to 90 nm. The numbers refer to the width of the smallest features etched on the chips; smaller transistors switch using less power. Intel has already shipped its first 90-nm processors, and AMD expects to do the same later this year.

In big-think terms, the move to multicore means that parallelism will become the dominant architectural construct of tomorrow's computers. In parallel setups, monolithic software applications are broken up into multiple instruction streams. If these streams can be run simultaneously--and they can be if there are no data dependencies (i.e., if one stream doesn't have to wait for the results of another)--then the overall execution speed of an app can be greatly increased.
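A tiny Python sketch (a hypothetical example, not any vendor's code) shows the idea: two streams with no data dependencies can run side by side, while a step that needs their results has to wait for both.

```python
# Two independent instruction streams run concurrently; the combining
# step depends on both results, so it cannot start until they finish.
from concurrent.futures import ThreadPoolExecutor

def stream_a():
    return sum(range(1_000))          # independent work

def stream_b():
    return sum(range(1_000, 2_000))   # independent work

with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(stream_a)
    fb = pool.submit(stream_b)
    # Data dependency: this line blocks until both streams complete.
    total = fa.result() + fb.result()

print(total)  # same answer as running the two streams one after another
```

The answer is identical either way; what parallel hardware buys you is the chance to overlap the independent parts.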

Today, computers make use of what's called logical parallelism. Most PCs have only one processor. However, the operating system can turn that single resource into multiple, virtual processors. Indeed, several years ago Intel coined the term "hyperthreading" to describe a processor technology that lets multiple threads of software run simultaneously on one physical chip. Both Windows and Linux have since been outfitted with support for hyperthreading.
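From software's point of view, those virtual processors look just like real ones. A quick illustration (assuming a reasonably modern OS): you can ask how many logical processors the operating system exposes, and on a hyperthreaded chip that count can exceed the number of physical processors.

```python
import os

# The OS reports logical processors; with hyperthreading enabled,
# a single physical chip can show up as two of them.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```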

Clearly, hyperthreading enables virtual parallelism. Just as obviously, if software can dole out instruction streams to multiple virtual processors, that code should be equally adept at routing those streams to multiple physical cores (a.k.a. multicore processors).

That's probably why at its recent analysts meeting, Intel said it sees hyperthreading as the first step toward multicore operation. In theory, at least, the transition should be smooth.

For now, as the summer heat continues to swell, think cool chips.