Hot Times Call For Cooler CPUs

Today, when even the pokiest PCs clock in at more than 1.0 GHz, "hot" mostly means notebooks that threaten to singe one's thighs and desktops outfitted with screaming cooling fans.

Recently, this burning issue of discomforting proportions has moved both Intel and AMD to take decisive action. The fallout will soon result in the biggest changes to microprocessor product lineups VARs will have at their disposal since the first Pentium was introduced in 1993.

I'm talking about the move to multicore chips, which place two (or, in several years' time, four) complete CPUs on a single semiconductor die.

Intel dropped the first shoe in May, when it told attendees at its Spring Analysts meeting that all of its future microprocessor development (except for the upcoming Prescott) will be multicore. Processors for both desktops and servers will ship in volume in 2005. AMD weighed in a month later, tipping plans to release multicore Opterons and Athlons next year.


(I should note that IBM actually beat everyone to the multicore punch. It rolled out a dual-core Power4 processor, for use in its RISC servers, in 2001.)

However, this shift isn't coming about by choice, but by necessity. It costs a systems builder about $32 in bill-of-materials parts to cool an 80-watt (W) processor. (A power supply, voltage regulator and capacitor come in at $20 to $25. The heat sink and fan add another $10.)

When Intel found that its in-the-works Tejas processor would have drawn upward of 100 W, which would have forced the use of more expensive cooling solutions, it decided to pull the plug and move to multicore.

The benefits are obvious. A processor with dual 2.0-GHz cores can deliver performance not all that different from a single-core 3.5-GHz part. More important, the dual-core device holds down power dissipation to a figure closer to that of a standalone 2.0-GHz CPU than a 3.5-GHz chip. So processing throughput effectively doubles for not a whole lot more power.
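That back-of-the-envelope tradeoff can be sketched in a few lines. The numbers below are illustrative, not vendor specs; the power model uses a common rule of thumb (dynamic power scales with frequency times voltage squared, and required voltage rises roughly with frequency), not measured figures for any actual chip:

```python
def aggregate_ghz(cores, clock_ghz):
    # Total clock cycles available per second (in GHz) across all cores --
    # a crude stand-in for peak throughput on well-threaded work.
    return cores * clock_ghz

def relative_power(cores, clock_ghz):
    # Rule of thumb: power per core grows roughly with the cube of clock
    # speed, since pushing frequency also demands higher voltage.
    return cores * clock_ghz ** 3

dual = (aggregate_ghz(2, 2.0), relative_power(2, 2.0))
single = (aggregate_ghz(1, 3.5), relative_power(1, 3.5))

print("dual 2.0 GHz:", dual)      # more aggregate cycles...
print("single 3.5 GHz:", single)  # ...at a fraction of the power
```

Under those assumptions, two 2.0-GHz cores offer slightly more aggregate clock throughput than one 3.5-GHz core while dissipating far less power, which is exactly the bet Intel and AMD are making.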

Moving forward, I believe power-dissipation specs will be just as important as clock speed in picking a processor.

A key executive at AMD agrees. "As we develop our plans around multicore, power is a major consideration in what we're doing," Kevin Knox, AMD's director of worldwide enterprise business development, told me in an interview.

Knox sees power as a front-line issue for customers, pointing to his recent trip to New York's Wall Street. "It was amazing how many people were talking to me about power," he said.

Concerns are three-pronged. "First, they want to lower their power costs," Knox said. "Second, data centers are becoming more crowded, and expansion in New York is pretty expensive. The final issue is, these guys have racks of blade servers, and they can't populate the racks, because they can't get enough power in there."

Putting a lid on server wattage could encourage Knox's customers to stuff more systems into those racks, which he says are often only 25 percent full.

An additional technological angle that will help AMD and Intel rein in power is a transition now under way in the semiconductor industry. There, fabrication technology is moving from 130 nm to 90 nm. The numbers refer to the width of the lines etched on the chips; the tinier lines require less power to push electrons through.

Intel has already shipped its first 90-nm processors, and AMD expects to do the same later this year.

But before multicore can become ubiquitous, there's one missing piece of the technological puzzle that has to be addressed. That would be software: applications must expose their parallelism as independent tasks, or threads, and the operating system must be able to schedule those threads to run simultaneously on the different cores. Linux and Windows are in various stages of support for hyperthreading on different 64-bit and multicore architectures. More about the software angle in future columns.
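The division of labor looks something like this minimal sketch: the application spawns threads over independent slices of a job, and a multicore-aware OS decides which core each thread runs on. (Illustrative only; a real parallel workload would use compute-heavy tasks and a runtime whose threads can genuinely execute side by side.)

```python
import threading

results = {}

def work(task_id, n):
    # Each thread handles an independent slice of the overall job.
    results[task_id] = sum(range(n))

# Spawn one thread per slice; on a dual-core system the OS scheduler
# is free to place the two threads on different cores.
threads = [threading.Thread(target=work, args=(i, 1000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both slices to finish

print(results)  # {0: 499500, 1: 499500}
```

The key point is that the OS, not the application, chooses where each thread executes; code written this way benefits from a second core without modification.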

For now, as the summer heats up, think cool chips.