How much more than 800MHz though? With a starting point of 745MHz, going up to just over 800MHz isn't terribly impressive, IMO.
The starting point is 675MHz, not 745.
So the default clocks shown by ATITool in Shamino's piece are not default?
They should know better than to post that today if it's really true.....
Sure, it does require hardware support, but to clarify what I meant... you already need to be able to run lower clocks for debug, so most of the hardware support should be a given. Then the only issue is making it dynamic. That's why I'm thinking it's mostly a software issue.

Yeah, it definitely should help by lowering voltage, as I was reminded privately. I wouldn't say it's pure software though; power states might also require some hardware, I guess.
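As a rough back-of-the-envelope of why the voltage part matters: dynamic power scales roughly as f·V², so dropping both the clock and the voltage in a 2D state buys a lot more than dropping the clock alone. All the clocks and voltages below are invented, just to show the scaling:

[code]
# Rough illustration of why a lower-voltage 2D power state helps.
# Dynamic power scales roughly as P ~ C * V^2 * f; the capacitance term
# cancels out when comparing two states of the same chip.
# All clocks/voltages here are invented, not real G84 figures.

def relative_power(freq_mhz, voltage, ref_freq_mhz, ref_voltage):
    """Power of a state relative to a reference state, using P ~ V^2 * f."""
    return (voltage / ref_voltage) ** 2 * (freq_mhz / ref_freq_mhz)

full_3d = (675.0, 1.10)   # hypothetical 3D clock (MHz) and voltage (V)
idle_2d = (400.0, 0.95)   # hypothetical 2D clock and voltage

clock_only  = relative_power(idle_2d[0], full_3d[1], *full_3d)  # drop clock only
clock_and_v = relative_power(*idle_2d, *full_3d)                # drop clock and voltage

print(f"2D clock at 3D voltage: {clock_only:.0%} of full power")
print(f"2D clock at 2D voltage: {clock_and_v:.0%} of full power")
[/code]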
Obviously not! Thanks.

MSI showed HDCP and HDMI support on some 8500 and 8600 models at CeBIT, remember?
Wonder how the GT fares in OCing.
Nope, since in both cases the speeds are effective. GDDR3 and GDDR4 clock the DRAM core differently, as you remember, but the effective frequency is still the same.

Also, in reality, isn't GDDR3 at said speed faster than GDDR4 at the same speed because of the way the bits are divided (or something)?
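To illustrate the "clock the DRAM core differently" part, here's a rough sketch under the usual prefetch assumptions (4n for GDDR3, 8n for GDDR4); Python is just there to do the arithmetic:

[code]
# Sketch of DRAM core clock vs. effective data rate for GDDR3 and GDDR4.
# Assumes the usual prefetch depths: 4n for GDDR3, 8n for GDDR4,
# i.e. effective data rate = core clock * prefetch.

PREFETCH = {"GDDR3": 4, "GDDR4": 8}

def core_clock_mhz(effective_mhz, dram_type):
    """DRAM core clock needed to hit a given effective data rate."""
    return effective_mhz / PREFETCH[dram_type]

def bandwidth_gb_s(effective_mhz, bus_width_bits):
    """Peak bandwidth: effective transfers/s times bus width in bytes."""
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

for dram in ("GDDR3", "GDDR4"):
    print(f"{dram}: {core_clock_mhz(2000, dram):.0f} MHz core for a 2000 MHz effective rate")

# Same effective speed on the same 128-bit bus -> same peak bandwidth either way:
print(f"{bandwidth_gb_s(2000, 128):.0f} GB/s")
[/code]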
WE GOT UPDATES on the possible delay of G84 and G86. It looks like the April 17th date is on, but if they are going to launch with parts available on that day, you might be better off avoiding them.
The problem, as we understand it, is that the 2D modes are not clocked down to where they should be. NV has a 2D clock that is a lot lower than the 3D clock, which saves battery power and, in general, makes things run cooler and quieter.
The bug, we are told, prevents them from clocking 2D down to a level lower than 3D, increasing power consumption substantially. Basically, when the parts should be clocked down in 2D, where they spend most of their life, they instead run flat out.
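To put the "spend most of their life in 2D" point in perspective, here's a toy duty-cycle estimate; the wattages and the 90/10 desktop/gaming time split are completely made up:

[code]
# Toy duty-cycle estimate of average board power with and without a
# working 2D clock-down. The wattages and the 90/10 time split are
# invented for illustration, not measured G84/G86 numbers.

def average_power(power_2d_w, power_3d_w, frac_time_2d=0.9):
    """Time-weighted average power over a typical desktop day."""
    return frac_time_2d * power_2d_w + (1 - frac_time_2d) * power_3d_w

broken  = average_power(power_2d_w=45, power_3d_w=50)  # 2D stuck at 3D clocks
working = average_power(power_2d_w=20, power_3d_w=50)  # 2D properly clocked down

print(f"2D stuck at 3D clocks: {broken:.1f} W average")
print(f"2D clocked down:       {working:.1f} W average")
[/code]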
Well, higher latency does require more GPU die area, as you need bigger FIFO buffers and have to have more pixels in flight to hide the latency for read-modify-write operations (blending).

You do expect higher latency with GDDR4, but GPUs don't care much. CPUs want low latency more than high bandwidth (that's why DDR2 fared worse than DDR1 at the beginning), while GPUs are happy with high latency as long as the bandwidth is there.
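A quick Little's-law style sketch of why that latency hiding costs die area: the GPU needs roughly latency × throughput pixels in flight, and the FIFOs have to hold them. The latencies, pixel rate and clock below are invented; only the scaling is the point:

[code]
# Little's-law sketch: to keep the pipeline busy you need roughly
# latency * throughput requests in flight, so higher memory latency
# means more pixels in flight and bigger FIFOs. Numbers are invented.

def pixels_in_flight(latency_ns, pixels_per_clock, core_clock_mhz):
    """Pixels the GPU must track to cover one memory round trip."""
    latency_clocks = latency_ns * core_clock_mhz / 1000.0  # ns -> core clocks
    return int(latency_clocks * pixels_per_clock)

def fifo_bytes(in_flight, bytes_per_pixel=4):
    """Rough FIFO footprint for that many in-flight pixels."""
    return in_flight * bytes_per_pixel

for latency_ns in (200, 300, 400):  # hypothetical memory latencies
    n = pixels_in_flight(latency_ns, pixels_per_clock=16, core_clock_mhz=675)
    print(f"{latency_ns} ns latency -> {n} pixels in flight, ~{fifo_bytes(n)} bytes of FIFO")
[/code]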