The Official G84/G86 Rumours & Speculation Thread

Yeah, it's weird. I've seen 675, 710 and now 720 MHz. But the "OC models" will probably be clocked higher than that. Hopefully we'll know pretty soon. The next two months will rock. :smile:
 
It's possible, because the G80 doesn't seem to have 2D/3D clocks either.
Indeed, and here's some food for thought: what's the point of 2D clocks when you can likely disable/power-down 80%+ of your GPU when idling? If not more... As such, it seems likely to me that much of the remaining power consumption is nearly invariant of clock speed (static leakage etc.), although I could be wrong of course!
 
Indeed, and here's some food for thought: what's the point of 2D clocks when you can likely disable/power-down 80%+ of your GPU when idling? If not more... As such, it seems likely to me that much of the remaining power consumption is nearly invariant of clock speed (static leakage etc.), although I could be wrong of course!

Of course, but hold on, do current GPUs use such power saving techniques? (look at the bolded part above). I thought this is why they employed 2D/3D clocks: because IMO GPUs are refreshed in much shorter cycles than CPUs (with the architecture being completely overhauled every 1-2 years or less), they cannot spend as many resources on power saving techniques as Intel/AMD can.

Although the CPU side of things is getting a bit of a treatment in that area, especially with the upcoming Penryn, GPUs unfortunately still need some.

I still remember the 7900GT only consuming 47W at load. Impressive engineering from nVIDIA if you ask me: they cut down the transistor count, reduced the die size thanks to the move to the 90nm process, and employed other techniques to get that sort of result. (The X1900GT consumed ~71W; all numbers from xbitlabs.)

The 8600GTS uses a stock single HSF setup. Big thumbs up, because when you're looking at the leaked pics of the RV630XT it looks like ATi still hasn't focused enough on the power/heat department.
 
IMO GPUs are refreshed in much shorter cycles than CPUs (with the architecture being completely overhauled every 1-2 years or less), they cannot spend as many resources on power saving techniques as Intel/AMD can.
You know, that wasn't true back then and it's even less true today. There's a clear lineage from the TNT to the NV30, just like there's a clear lineage from the K7 (arguably K6?) to the K10. The same is true for 3DFX, Intel, and many others.

The difference between CPUs and GPUs is more that GPUs have more iterations of the same architecture. CPUs tend to just have minor derivatives based on the cache size and, more recently, number of cores. GPUs need to evolve to add new features, and also hit the TSMC/UMC half-nodes including 150nm, 110nm and 80nm. Intel and AMD, on the other hand, only focus on the full nodes, and there are many reasons why that makes sense.

Also, if you exclude code morphing techniques, it's not like it's easy to come up with a new CPU architecture that's really better than what you could come up with by evolving your previous architecture. Unless, of course, your previous architecture just couldn't possibly be evolved because its design goals are incredibly different from what you'd want to do today. The Pentium 4 is a fairly obvious example there.
Of course, but hold on, do current GPUs use such power saving techniques? (look at the bolded part above). I thought this is why they employed 2D/3D clocks: because [...]
2D/3D clocks were introduced by NVIDIA for the GeForce FX Series. Back then, GPUs were deep, but not anywhere as wide as they are today. The NV3x only had one 'pipeline', or more precisely one quad pipeline for NV30/NV35 and one half-quad-pipeline for the lower-end derivatives. Now, you tell me how to disable *part* of one pipeline! :p (without disabling specific ALUs in it, which wouldn't do miracles and might be much more complex!)

Now, look at G8x. You've got 8 clusters, and each cluster has two 'multiprocessors' (aka ALU blocks) and one quad-pipeline TMU. You've also got 6 quad-ROPs, and each of those is directly associated with one memory channel. Now, think about what might happen when running Vista Aero. Disable 7 clusters, and maybe even one of the two multiprocessors in the remaining cluster and the interpolation/SFU unit. Disable several ROPs based on how much video memory you need. And since Aero never needs to run at 250FPS, much of the time you could power-down everything else on the GPU too.
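Purely as a thought experiment, here's a rough Python sketch of what such an idle gating policy might look like. None of these names, numbers or thresholds come from NVIDIA; they just mirror the G8x unit counts mentioned above.

def idle_gating_plan(fps_needed, framebuffer_mb,
                     total_clusters=8, rop_memory_pairs=6,
                     mb_per_channel=128):
    """Hypothetical: decide how much of the chip to keep powered for Aero-class work."""
    # A desktop compositor needs almost no shading throughput, so keep one
    # cluster, and only one of its two multiprocessors for a trivial load.
    clusters_on = 1
    multiprocessors_on = 1 if fps_needed <= 60 else 2
    # Keep only as many quad-ROP/memory-channel pairs as the framebuffer needs.
    channels_needed = -(-framebuffer_mb // mb_per_channel)  # ceiling division
    channels_on = max(1, min(rop_memory_pairs, channels_needed))
    return {
        "clusters gated": total_clusters - clusters_on,
        "multiprocessors on": multiprocessors_on,
        "ROP/memory channels on": channels_on,
    }

# e.g. Aero at 60FPS with ~256MB of framebuffer in use:
print(idle_gating_plan(fps_needed=60, framebuffer_mb=256))
# {'clusters gated': 7, 'multiprocessors on': 1, 'ROP/memory channels on': 2}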

That's not complicated. It's easy as hell. As Jen-Hsun would say, it's even easier than walking two miles! ;) (bonus points to whoever remembers where that quote is from...) - and honestly, I don't know if it works like that, but you'd seriously expect it to. You can apply roughly the same principles to G7x, R5xx and R6xx of course. Although being wider helps, obviously.

And I'll admit I don't really understand people who say GPUs are bad in terms of power consumption. AMD and NVIDIA GPUs are just fine in terms of power consumption given their die size and the amount of high-voltage GDDR memory that must be considered in the TDP calculation. Is there some room for improvement? Sure. But there is with CPUs too, and I don't think it's so obvious where there is the most room for improvement either. Custom logic helps CPUs for a given level of performance, but there's no magic there. The GPUs certainly have an image problem in terms of power consumption in the mainstream, though.
 
Looks like NV is pushing the clocks to the limit (are they scared of RV630? ;) ), or this is a factory OC version.
Coolaler tested the 8600GTS card and that card has the rumored 675/2000MHz clock speeds.

Actually, the guy that provided me with that jpeg also told me that it was a factory OC'ed card, and that the RV630 isn't nearly a threat to the corresponding G84 model, even at stock speeds.
As I said, it's hearsay, so take it with a grain of salt.
 
Actually, the guy that provided me with that jpeg also told me that it was a factory OC'ed card, and that the RV630 isn't nearly a threat to the corresponding G84 model, even at stock speeds.
As I said, it's hearsay, so take it with a grain of salt.

NV usually does well with its midrange cards, so that doesn't surprise me.

Btw, what's the 8600 GTS retail price? $200?
 
Indeed, and here's some food for thought: what's the point of 2D clocks when you can likely disable/power-down 80%+ of your GPU when idling? If not more... As such, it seems likely to me that much of the remaining power consumption is nearly invariant of clock speed (static leakage etc.), although I could be wrong of course!
The point is to save even more power. IMO, lowering clocks is mostly a software issue these days, so I'd be shocked if Nvidia didn't do this in addition to other power saving techniques.
 
Besides, when we're talking about functional units that could be used for other purposes, shutting down units only cuts power linearly with the performance you give up, while reducing clock speeds (and the voltage with them) cuts power super-linearly.
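To put made-up numbers on that, here's a back-of-the-envelope sketch using the usual dynamic power approximation P ≈ C·V²·f plus a static leakage term; every constant below is invented for illustration, not measured from any real card.

# Toy model: total power = static leakage + dynamic power (~ C * V^2 * f).
P_STATIC = 10.0        # W, leakage -- roughly invariant of clock speed
P_DYNAMIC_FULL = 60.0  # W, dynamic power with all units at full clock/voltage

def gate_units(fraction_active):
    """Shutting down units: dynamic power (and throughput) drop linearly."""
    return P_STATIC + P_DYNAMIC_FULL * fraction_active

def scale_clocks(freq_fraction, volt_fraction):
    """Lowering clocks lets voltage drop too, and voltage enters squared,
    so power falls super-linearly for the same throughput loss."""
    return P_STATIC + P_DYNAMIC_FULL * freq_fraction * volt_fraction ** 2

# Both cases give up half the throughput:
print(gate_units(0.5))           # 40.0 W
print(scale_clocks(0.5, 0.8))    # ~29.2 W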
 
The point is to save even more power. IMO, lowering clocks is mostly a software issue these days, so I'd be shocked if Nvidia didn't do this in addition to other power saving techniques.
Yeah, it definitely should help by lowering voltage, as I was reminded privately. I wouldn't say it's pure software though; power states might also require some hardware support, I guess.
 