Seems normal:
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/1.html (yup, that's the TPU review)
Asymmetric SIMDs?
The organization of the SMX's execution units isn't truly apparent in the diagram above. Although Nvidia likes to talk about them as individual "cores," the ALUs are actually grouped into execution units of varying widths. In the SMX, there are four 16-ALU-wide vector execution units and four 32-wide units. Each of the four schedulers in the diagram above is associated with one vec16 unit and one vec32 unit. There are eight special function units per scheduler to handle, well, special math functions like transcendentals and interpolation. (Incidentally, the partial use of vec32 units is apparently how the GF114 got to have 48 ALUs in its SM, a detail Alben let slip that we hadn't realized before.)
Although each of the SMX's execution units works on multiple data simultaneously according to its width—and we've called them vector units as a result—work is scheduled on them according to Nvidia's customary scheme, in which the elements of a pixel or thread are processed sequentially on a single ALU. (AMD has recently adopted a similar scheduling format in its GCN architecture.) As in the past, Nvidia schedules its work in groups of 32 pixels or threads known as "warps." Those vec32 units should be able to output a completed warp in each clock cycle, while the vec16 units and SFUs will require multiple clocks to output a warp.
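The throughput math the article describes can be sketched quickly: a 32-thread warp retires in one clock on a vec32 unit, but needs two clocks on a vec16 unit (and four on the eight SFUs per scheduler). A minimal illustration; the helper function is mine, not from the article:

```python
# Cycles for one execution unit to retire a full warp, given its SIMD width.
# Warp size and unit widths come from the article; the helper is illustrative.
WARP_SIZE = 32

def cycles_per_warp(unit_width: int) -> int:
    """Clocks needed to push all 32 warp elements through a unit of this width."""
    return -(-WARP_SIZE // unit_width)  # ceiling division

print(cycles_per_warp(32))  # vec32 unit: 1 clock per warp
print(cycles_per_warp(16))  # vec16 unit: 2 clocks per warp
print(cycles_per_warp(8))   # 8 SFUs per scheduler: 4 clocks per warp
```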
Adaptive Vsync is true bliss here -- zero input lag, no tearing, and no stuttering. No need for triple buffering (wasted memory), either.
TPU's 7900 numbers are from launch drivers, though
hmm, then 256 CCs per SMX for biggie?
Now, that's weird (from TR's review):
Asymmetric SIMDs?
nvidia's are not newest too
NVIDIA Releases the 301.10 WHQL Driver for the GeForce GTX 680
Drivers:
NVIDIA: 285.62
ATI: Catalyst 11.12
HD 7950 & 7970: 8.921.2 RC11
HD 7750 & HD 7770: 8.932.2
GTX 680: 300.99
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/6.html
It's already available at geforce.com
Probably. And this way the big Kepler won't need to double the number of those fat SMs - just four more would be enough.
hmm, then 256 CCs per SMX for biggie?
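For what it's worth, the speculated numbers work out like this. GK104's 8 SMXs of 192 ALUs each are from the reviews; the big-Kepler figures are the thread's speculation, not confirmed specs:

```python
# GK104 as reviewed: 8 SMXs x 192 "cores" each.
print(8 * 192)        # 1536 ALUs

# Thread's speculation A: keep the 192-ALU SMX, add four more of them.
print((8 + 4) * 192)  # 2304 ALUs

# Thread's speculation B: fatter 256-ALU SMXs at the same SMX count.
print(8 * 256)        # 2048 ALUs
```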
It does raise the question as to why Tweaktown's benchmarks and power consumption numbers are different from the other reviews'. If you read the comments, he does say that they didn't receive a card from them and he's not under NDA. Tweaktown's review shows higher power consumption for the GTX 680. So will the retail cards' performance and power consumption actually be different from what's reviewed?
301.10 came out today or so. 300.99 is the intended review driver (which sped up Dirt 3 a bit in our test on a GTX 580 as well).
nvidia's are not newest too
Big difference in implementation. PowerTune, being a calculated, "inferred" power at the chip level, can be implemented in a deterministic or non-deterministic manner; in initial implementations we have chosen deterministic in order to ensure all cards deliver the same performance regardless of the ASIC characteristics or the environment they are in. The Boost implementation here appears to use the GTX 580's input power draw circuitry, which means it is entirely non-deterministic, and you can see different performance with different chip characteristics and even with the conditions the end user is running under.
With the limited amount of info available, I don't see how GPU Boost differs from PowerTune?
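The distinction being drawn can be sketched roughly: a deterministic limiter throttles on power *estimated* from activity counters (identical on every chip of a model), while a non-deterministic one throttles on *measured* board power, which varies with silicon leakage and operating conditions. A toy model; all names and wattages below are made up for illustration:

```python
# Toy contrast between the two throttling styles described above.
# All values are hypothetical, for illustration only.

TDP_LIMIT = 195.0  # watts, made-up board power limit

def deterministic_throttle(estimated_activity_power: float) -> bool:
    """PowerTune-style: throttle on power *calculated* from activity counters.
    Same result for every card of the model, regardless of silicon quality."""
    return estimated_activity_power > TDP_LIMIT

def measured_throttle(activity_power: float, leakage_power: float) -> bool:
    """GPU Boost-style: throttle on *measured* input power.
    A leaky chip or a hot room adds real watts, so behavior varies per card."""
    return (activity_power + leakage_power) > TDP_LIMIT

# Same workload (190 W of activity), two different chips:
print(deterministic_throttle(190.0))   # False for every card
print(measured_throttle(190.0, 2.0))   # False: low-leakage chip keeps boosting
print(measured_throttle(190.0, 10.0))  # True: leaky chip throttles sooner
```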