nVidia - expect 120 Watt GPUs soon!

Nagorak said:
Intel has proclaimed that 100W of heat is the maximum for processors. Maybe they're conservative, but honestly you can't go that much higher than that. I guess graphics cards have a large die, so there's more surface to cool, but even so a 120W VPU is just insane... Nvidia chips have always run a little on the hot side, but now it seems like they're getting carried away. I really don't see this working out so well...

Well, the 3 GHz P4 is putting out about 75 watts right now, and the forthcoming 3.2 GHz P4 will be approaching 80 watts. Now consider that GPUs are growing in transistor count and complexity far faster than general-purpose CPUs, and the 120W figure, while alarming, isn't all that unrealistic. Cooling is going to continue to be a concern for 3D chips in the future, just like cooling IS a concern for CPUs today.
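
To get a rough feel for those numbers, here is a back-of-envelope Python sketch of dynamic power. The transistor counts, activity factors, and switching energy below are illustrative assumptions, not measured values.

# Very rough dynamic power model: P ~ transistors * activity * E_switch * f.
# All inputs are illustrative guesses, not datasheet figures.

def estimate_power_w(transistors, clock_hz, activity, energy_per_switch_j):
    """Approximate dynamic power in watts."""
    return transistors * activity * energy_per_switch_j * clock_hz

# P4-class chip: ~55M transistors at 3 GHz, low average switching activity.
cpu_w = estimate_power_w(55e6, 3.0e9, 0.05, 9e-15)
# Hypothetical big GPU: ~130M transistors at 450 MHz, but a much larger
# share of the chip switching every cycle because of the parallel pipelines.
gpu_w = estimate_power_w(130e6, 450e6, 0.23, 9e-15)

print(f"CPU-like estimate: {cpu_w:.0f} W")  # roughly 74 W
print(f"GPU-like estimate: {gpu_w:.0f} W")  # roughly 121 W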
 
Nagorak said:
Intel has proclaimed that 100W of heat is the maximum for processors. Maybe they're conservative, but honestly you can't go that much higher than that. I guess graphics cards have a large die, so there's more surface to cool, but even so a 120W VPU is just insane... Nvidia chips have always run a little on the hot side, but now it seems like they're getting carried away. I really don't see this working out so well...

For desktops, maybe. Many server processors have TDPs well over 100W. For example, Itanium 2 has a 130W TDP.
 
"putting gfx cards into their own small external cases"......heh, it would give me something else to start modding....<click sig below>....I love that idea!
 
I can't say this is a surprise to me. GPUs have lots of transistors, with counts equal to, and soon to exceed, those of the largest CPUs. Add to that the fact that the bulk of those transistors are active much of the time (CPUs are somewhat different in this respect, aren't they?). Add the fact that ATI and NVIDIA fan-boys want "clock speed and loadsa pipes, baby" and you end up with stonking power requirements.

Here was me contemplating buying a quiet PC case and quiet PSU... :(
 
pcchen said:
For desktops, maybe. Many server processors have TDPs well over 100W. For example, Itanium 2 has a 130W TDP.

So what? You realize that servers are stored in huge cabinets, right? They're also generally not around anyone who is going to be annoyed by the noise... Obviously it's "technically possible" to have a hotter CPU or GPU; the question is whether it's practical. It's also "technically possible" to have a CPU that puts out the heat of a nuclear reactor in your case, but that doesn't mean it would be a smart idea...

Johnny Rotten said:
Well, the 3 GHz P4 is putting out about 75 watts right now, and the forthcoming 3.2 GHz P4 will be approaching 80 watts. Now consider that GPUs are growing in transistor count and complexity far faster than general-purpose CPUs, and the 120W figure, while alarming, isn't all that unrealistic. Cooling is going to continue to be a concern for 3D chips in the future, just like cooling IS a concern for CPUs today.

Where did I say it was unrealistic? I said it was a bad idea. Eventually we're going to hit a brick wall in terms of cooling, and it's probably going to be sooner rather than later. The GFFX 5800 Ultra isn't anywhere near 120W, and it proved to be a disaster. I really don't think 120W GPUs are going to be practical; it's just not going to work out. 120W is hotter than many processors, and look at the massive heatsinks we use on those! We're not talking about giving up 2 PCI slots, we're talking about giving up 4 or 5! And that's not taking into account how that monster is going to remain attached while hanging upside down from the graphics board.

The bottom line is that ATI and Nvidia are going to have to come up with a smarter approach than "clock speed, clock speed, clock speed" and "more pipes", be that deferred rendering or whatever.
 
Nagorak said:
pcchen said:
For desktops, maybe. Many server processors have TDPs well over 100W. For example, Itanium 2 has a 130W TDP.

So what? You realize that servers are stored in huge cabinets, right?
They're also generally not around anyone who is going to be annoyed by the noise... Obviously it's "technically possible" to have a hotter CPU or GPU; the question is whether it's practical. It's also "technically possible" to have a CPU that puts out the heat of a nuclear reactor, but that doesn't mean it would be a smart idea...


Servers don't always run in cabinets. First of all, people today are squeezing servers into 1U, 1/2U, and 1/4U form factors, much smaller than your desktop. And not every business has a data center or co-location; many small businesses run their servers in a normal office suite. Itanium and Opteron will eventually find their way into desktops (once they come down from $4,000 per chip).

Secondly, the Itanium has about the same number of transistors as today's GPUs. GPUs are not hot simply because of clock speed, but also because they are massively parallel. One day there will be a 1-billion-transistor GPU (as well as CPU), and it's going to be hot.
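
To put that parallelism point in rough numbers, here is a hedged sketch: the split between mostly-idle cache and busy parallel logic, the activity factors, and the switching energy are all assumptions for illustration only.

# Rough dynamic power estimate with separate activity for cache vs logic.
# All values below are made up for illustration, not real specs.

def chip_power_w(transistors, clock_hz, logic_fraction,
                 logic_activity=0.30, cache_activity=0.02,
                 energy_per_switch_j=8e-15):
    switching = (transistors * logic_fraction * logic_activity +
                 transistors * (1 - logic_fraction) * cache_activity)
    return switching * energy_per_switch_j * clock_hz

# Itanium-like CPU: ~220M transistors, most of them cache, 1 GHz.
print(round(chip_power_w(220e6, 1.0e9, logic_fraction=0.15)), "W")  # ~109 W
# GPU-like chip: ~120M transistors, mostly parallel pipelines, 500 MHz.
print(round(chip_power_w(120e6, 0.5e9, logic_fraction=0.80)), "W")  # ~117 W

Half the clock and roughly half the transistors, yet similar power, because far more of the GPU's transistors are actually doing work each cycle.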

I frankly do not care if it is hot or loud. I pay a premium for power and progress. If you don't want a dragster, buy a Civic. The top-of-the-line future GPUs are going to put out heat and suck power like crazy. If you don't like it, buy a "cut down" version tailored for heat, power, and sound. But just because some of us want muscle cars and you want a quiet luxury car, don't tell me to trade power for comfort.

I am personally in favor of the graphics companies packing in as many transistors as they can physically fit into a process, at the highest clock rate possible, with as much RAM as possible at the highest clock rate and widest bus. I don't want them to waste silicon trying to cut power consumption or heat. Leave that to the "mobile" or "mainstream" versions of the chips.

Hell, I would love it if water cooling were a standard feature sold by OEMs for hardcore users, rather than a mod you have to hack together yourself.

In other words, Nvidia and ATI can create quiet, low-heat, low-power versions for average users, but when it comes to speed, I want them to *spare no expense*.
 
What bothers me with ever increasing power requirements of GPUs (and for that matter, CPUs) is that it goes against other major trends in computing.

Basically, in the PC space the high end has had the advantage of being the technology driver for the low end - the cost of development is amortized over a large volume of consumers. At the moment the larger trend is away from desktops and towards portable computing in various forms. On the desktop, anecdotal evidence from trade shows and other fora suggests a trend towards more practical computers: small, quiet, and energy efficient. LCD panels are a prime example, taking over largely because they fit this trend, in spite of their greater cost and other problems.

Designing high-end GPUs with progressively higher energy requirements is questionable because: (a) they will lose marketability - some people just won't buy noisy power hogs, even though they could otherwise afford these cards and want the performance they offer; and (b) transferring the technology might become more of a problem the wider the gulf grows between high-end desktop GPUs and mobile or lower-cost parts, leading to increased overall development and design cost.

In short, a 120 watt desktop GPU is probably not a suitable basis from which to develop a portable GPU, nor necessarily a good starting point for a low-cost part. Increasing the overall power draw of the system is a severe liability when it comes to reducing size and noise - something consumers are actually requesting and are demonstrably willing to pay for.

There are ways of dealing with this - designing power saving into all parts of a GPU: process tech, gate level, regulation of functional blocks, overall design - all of these have to be actively pursued. But although we could argue about where the marketplace limits lie, increasing power draw just isn't a sustainable strategy. Nor is it, IMHO, desirable to push to its limits.

Intel seems to be one of the few places where they think long term about the future of the platform. None of their forward-looking design studies appear to support vastly increased power draw. And although they have acknowledged that GPUs' power needs have increased, PCI Express imposes a 60W limit, and PCI Express is likely to be around for a very long time.

The 120 W figure, on the other hand, gives some insight into where at least one gfx leader is heading. It may be the direction GPUs will go, Intel's plans notwithstanding. The marketplace will likely decide the issue. And as I said before, we're probably fairly indicative of the high-end gfx market. How far are we willing to support increased power requirements?

For me - no further.

Entropy
 
Entropy said:
What bothers me with ever increasing power requirements of GPUs (and for that matter, CPUs) is that it goes against other major trends in computing.

Which major trends? My newest HD is louder, consumes more power and is hotter than the one it replaces. My mobo chipsets are hotter and draw more power than their predecessors. My optical drive is louder and draws more power than the one it replaced. My RAM draws more power and runs a lot hotter than what it replaces. About the only area where I reduced noise and heat was my power supply, and that was because I spent extra to get a cooler-running PSU. When you already have half a dozen fans in your rig to keep the non-GPU/CPU components at solid temperatures, why should it surprise anyone when you need another half dozen for the two most important components? ;)

The trend right now is faster, louder, hotter. There is a growing niche of enthusiasts that want quiet operation, but that certainly isn't the trend in terms of most products hitting the market. Quiet and cool is nice and all, but if the choice comes down to superior performance or a cooler/quieter solution, bring the noise ;)
 
Entropy said:
There are ways of dealing with this - designing power saving into all parts of a GPU: process tech, gate level, regulation of functional blocks, overall design - all of these have to be actively pursued. But although we could argue about where the marketplace limits lie, increasing power draw just isn't a sustainable strategy. Nor is it, IMHO, desirable to push to its limits.

That might be fine for a part not designed for optimal performance, but I don't want wasted transistors or clock gating on my supposedly high-performance part. Do you think Ferraris, dragsters, etc. are designed for fuel efficiency or sound level?

And if you want to talk scalability, power is one of the few places where we can continue to scale a lot. Sooner or later we will run up against physical limits in terms of how small we can make transistors and how power-efficient we can make them. When we reach that level, we can no longer scale by making things smaller and using less power at equivalent performance. Instead, we simply build them bigger, which means hotter chips and more power draw.

That is the fundamental physical fact. There will come a day when our process size won't get any smaller, but we will continue to make bigger and bigger dies (possibly stacked 3-dimensional chips) that soak up more and more power and radiate more and more heat. Fortunately for us, power is relatively abundant at the moment, compared to what the devices we are talking about need.

When my GPU requires a megawatt to run, maybe I'll start to worry about power usage.
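
As a hypothetical illustration of that scaling argument: once shrinks stop, power at a roughly constant power density simply tracks die area. The density figure below is an assumption for illustration, not a prediction.

# Once feature size stops shrinking, assume power density stays roughly
# constant, so power grows in step with die area (or stacked layers).
POWER_DENSITY_W_PER_MM2 = 0.6  # assumed value, illustration only

for die_area_mm2 in (200, 400, 800, 1600):
    watts = die_area_mm2 * POWER_DENSITY_W_PER_MM2
    print(f"{die_area_mm2} mm^2 -> {watts:.0f} W")

In that regime, adding performance means adding area, and the heat comes along for the ride.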
 
Indeed, there is no short-term limit on power consumption per se - we can cool using phase change. The size and noise will be refrigerator-class, which is manageable as long as we accept tower chassis. Of course, there will be associated costs, but compared to overall system cost, this is still reasonable for higher-end PCs.

The issue is not what is possible, but what is desirable - what we want to support with our wallets, and how much we are willing to pay for it in cost, size, and noise.

Entropy
 
Entropy, Intel does not particularly like GPUs, of course... crippling part of their competition for consumer dollars by starving their products of power makes some sense ;)
 
It would certainly screw the console market!

"New X-Box 3 -- play games and heat your room. Great for those long winter nights".
 
BenSkywalker said:
The trend right now is faster, louder, hotter. There is a growing niche of enthusiasts that want quiet operation, but that certainly isn't the trend in terms of most products hitting the market. Quiet and cool is nice and all, but if the choice comes down to superior performance or a cooler/quieter solution, bring the noise ;)

Oh no, bring the water... (pats his watercooled R300 and Athlon XP 2700+; good lads, good lads...) ;)

Once you make the - agreed - slightly radical move to watercooling, you will get both high performance and quiet operation. You will be surprised just how quiet and effective two 120 mm fans running at 6 volts with a 120 x 240 mm radiator are. It's just sweet stuff... 8)
 
Entropy said:
Dropping the endless ATI vs nVidia vs AMD vs Intel vs Iraq for a moment, is this the direction you want your computers to develop?

Hmm. Maybe nVidia is planning on dropping the 120W "Mother of All GeForces" on Iraq. I hear it is as powerful as a small nuclear weapon...
 
Once you make the - agreed - slightly radical move to watercooling, you will get both high performance and quiet operation. You will be surprised just how quiet and effective two 120 mm fans running at 6 volts with a 120 x 240 mm radiator are. It's just sweet stuff...

I took a long look at water cooling when building my current rig, but ended up going with a XaserIII case instead. With the fan controls on the machine turning the fans down a bit, it isn't too loud at all; the only time I crank them up is when I'm gaming, which is when I have the volume turned up on my speakers anyway.

That said, how much of a pain in the ass was it to set up? I'm not looking to move to water right now; I have my XP at 2.1 GHz with off-die CPU temps around 45C under load, and I'm at my RAM's limit and have run out of multipliers :( My ambient case temp is looking very good right now, with my HDs running a couple of degrees over room temp - of course, that's with a dozen fans running ;)

Which H2O setup are you running and what are your temps like?
 
???

A question: what are the transistor counts and wattages on NV30/35 and R300/350, and shouldn't 0.13 require less power per million transistors?

Just curious, that's all.
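
For what it's worth, here is a hedged sketch of the second half of that question. The counts, clocks, voltages, and capacitance factors below are rough illustrative values in the ballpark of an R300-class and an NV35-class part, not exact specs.

# Relative dynamic power ~ N_transistors * C * V^2 * f (arbitrary units).
# Ballpark, illustrative inputs only -- not datasheet figures.

def rel_dynamic_power(n_million, cap_rel, volt, freq_mhz):
    return n_million * cap_rel * volt ** 2 * freq_mhz

older = dict(n_million=107, cap_rel=1.00, volt=1.7, freq_mhz=325)  # 0.15 um-ish
newer = dict(n_million=130, cap_rel=0.85, volt=1.6, freq_mhz=450)  # 0.13 um-ish

per_transistor = ((newer["cap_rel"] * newer["volt"] ** 2)
                  / (older["cap_rel"] * older["volt"] ** 2))
total = rel_dynamic_power(**newer) / rel_dynamic_power(**older)

print(f"power per transistor at the same clock: {per_transistor:.2f}x")  # ~0.75x
print(f"total chip power: {total:.2f}x")                                 # ~1.27x

So yes, the smaller process does cut power per transistor, but more transistors at higher clocks more than eat up the saving, which is how the totals keep climbing.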
 