VR-Zone: G96, G94 and G92 @ 55nm

Don't the rumours indicate there'll be an RV730 and an RV740? If so, 128-bit and 256-bit respectively?

If RV740 is a cut-down RV770, e.g. 6 clusters and no CrossFireX port, it should be about the same die size as G94b I guess.

:LOL: we might see RV740, if it is indeed this spec, being very close in performance to HD4850. Sort of a re-run of the way 9600GT ate into 8800GT.

Jawed
 
Why do you think G94b couldn't keep up with RV730? I wouldn't expect the latter (in a 4-SIMD configuration) to be any faster than RV670 (at similar clocks) - in fact it should be much slower in some areas (like the half-rate FP16 texture filtering).

Hmmm, you may be right. I underestimated how well G94 does versus RV670 today. I just looked at some recent benchmarks and it competes quite well (I blame Nvidia's mess of a lineup for my lowered opinion of the 9600GT).
 
Some improvements in resource utilization, better scheduling, a larger register file (even more crucial, IMO, the fewer ALUs you have), maybe even recovery of the lost MUL?
 
Most likely, since it is just a hair slower than the 20% higher-clocked 8600 GTS.
Doesn't look like that to me; it seems to perform almost identically to the 8600 clock for clock. RAM and core clocks are indeed ~20% lower compared to the GTS, but the shader clock (which is probably the most important one) is almost the same. And that's right where it performs, ranging from 2 to 15 percent slower usually. Of course, someone should just clock them the same and test that...
 
Doesn't look like that to me; it seems to perform almost identically to the 8600 clock for clock. RAM and core clocks are indeed ~20% lower compared to the GTS, but the shader clock (which is probably the most important one) is almost the same. And that's right where it performs, ranging from 2 to 15 percent slower usually. Of course, someone should just clock them the same and test that...

Well, OTOH it's consistently quite a bit faster in our tests than the overclocked 8600GT we're pitting it against:
http://www.pcgameshardware.de/aid,653855/Test/Benchmark/Geforce_9500_GT_im_PCGH-Test/
(even though it's in German, you might want to take a peek at the benchmark bars' universal language).

Judging from a quick glance (I didn't calculate an average...) I'd say it's about 25 percent faster on average. Both were tested using the same drivers, btw.
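
If anyone wants to actually compute that average, the usual way is a geometric mean of the per-game ratios. A minimal sketch in Python - the ratios below are placeholders, not the actual PCGH results, so plug in 9500GT-fps / 8600GT-fps for each benchmark:

from math import prod

# Placeholder per-game ratios (9500GT fps / 8600GT fps) - NOT the real PCGH numbers.
ratios = [1.28, 1.22, 1.31, 1.19, 1.25]

# Geometric mean is the sensible average for speedup ratios.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"average advantage: {(geomean - 1) * 100:.1f}%")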
 
Well, OTOH it's consistently quite a bit faster in our tests than the overclocked 8600GT we're pitting it against:
http://www.pcgameshardware.de/aid,653855/Test/Benchmark/Geforce_9500_GT_im_PCGH-Test/
(even though it's in German, you might want to take a peek at the benchmark bars' universal language).
Well, that's not really an overclocked 8600GT - only the core clock is raised, not the memory or shader clock (stupid, they should have increased at least the shader clock too - which manufacturer is that?)
Judging from a quick glance (I didn't calculate an average...) I'd say it's about 25 percent faster on average. Both were tested using the same drivers, btw.
Well, I'd expect performance mostly to scale with the shader clock, and the 9500GT has a 16% advantage there. That's not to say the core clock couldn't play any role, but I'd suspect even the memory clock advantage the 9500GT also has could be more important.
25% would be more of a performance difference than the clock differences alone could explain, but my quick glance at these results shows more like a 15-20% difference.
So for now that's not enough to convince me it's any faster clock for clock than the old chip (it could be, but I doubt it's by much).
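
Back-of-the-envelope version of that argument (the clocks below are placeholders roughly in the right ballpark - substitute the actual clocks of the two cards from the review):

# Rough check: how much speedup do the clock deltas alone predict?
shader_clk_9500gt, shader_clk_8600gt = 1400, 1200   # MHz, placeholder values
mem_clk_9500gt, mem_clk_8600gt = 800, 700           # MHz, placeholder values

shader_gain = shader_clk_9500gt / shader_clk_8600gt - 1
mem_gain = mem_clk_9500gt / mem_clk_8600gt - 1

print(f"shader clock advantage: {shader_gain:.0%}")  # ~17% with these numbers
print(f"memory clock advantage: {mem_gain:.0%}")     # ~14% with these numbers
# If performance scales mostly with the shader clock, anything much beyond
# ~15-20% has to come from per-clock improvements rather than frequency.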
 
Well, OTOH it's consistently quite a bit faster in our tests than the overclocked 8600GT we're pitting it against:
http://www.pcgameshardware.de/aid,653855/Test/Benchmark/Geforce_9500_GT_im_PCGH-Test/
(even though it's in German, you might want to take a peek at the benchmark bars' universal language).

Judging from a quick glance (I didn't calculate an average...) I'd say it's about 25 percent faster on average. Both were tested using the same drivers, btw.

Why in the hell is the 9500 GT faster than the HD 3850 in COD4? O_O
 
Some improvements in resource utilization, better scheduling, a larger register file (even more crucial, IMO, the fewer ALUs you have), maybe even recovery of the lost MUL?

A larger register file and better use of the MUL - those are features of GT200. Such a chip would have been a GT2xx (or G2xx), not a G96.
 
http://www.firingsquad.com/hardware/nvidia_geforce_9500_gt_performance/page2.asp

However NVIDIA has incorporated several tweaks into their G9x GPUs in comparison to the G8x generation. For starters there's PCI Express 2.0. PCIe 2.0 offers double the bandwidth of PCIe 1.1: 8.0GB/sec in each direction, for a total of 16GB/sec of bandwidth.

In addition, G96 (like G92 before it), boasts improved color and z-compression over G84. This allows the GPU to make more efficient use of its available memory bandwidth, and should help the most at high resolutions, particularly once AA is applied.

Where did the rest of the extra transistors go? Besides the aforementioned improvements, G96 also adds support for DisplayPort displays. We’ve also been told that transistor counts can vary between each manufacturing process, so expect some variance there.
Nothing there about the ALUs or the register file.
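
The PCIe numbers in the quote do check out, by the way - a quick sanity-check sketch, assuming a standard x16 link with 8b/10b encoding:

# Per-direction bandwidth of a x16 PCIe link (gen 1.1 and 2.0 both use 8b/10b).
def pcie_bw_gb_s(transfer_rate_gt_s, lanes=16, encoding=8 / 10):
    return transfer_rate_gt_s * encoding / 8 * lanes  # GB/s per direction

for gen, rate in (("PCIe 1.1", 2.5), ("PCIe 2.0", 5.0)):
    per_dir = pcie_bw_gb_s(rate)
    print(f"{gen}: {per_dir:.1f} GB/s per direction, {per_dir * 2:.1f} GB/s total")
# -> PCIe 2.0 gives 8.0 GB/s each way and 16 GB/s total, matching the article.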

The core overclocks extremely well. The article also notes that NVIDIA has provided a reference overclocked card, something it normally doesn't do. So we'll be seeing overclocked OC cards too?

Jawed
 
Well, that's not really an overclocked 8600GT - only the core clock is raised, not the memory or shader clock (stupid, they should have increased at least the shader clock too - which manufacturer is that?)
It was a model from Gigabyte.

Why in the hell is the 9500 GT faster than the HD 3850 in COD4? O_O
It's the level which was already present in the demo - and which has become our benchmark since then. Seems the Radeons just don't like this level for some reason. ixbt.com sees the same - Radeons performing below par: http://www.ixbt.com/video/itogi-video/test/0806_9800gx2.shtml (you might need to scroll a bit)

256 MB for the 3850, 512MB for the 9500GT.
True, but not a factor in this case.


A larger register file and better use of the MUL - those are features of GT200. Such a chip would have been a GT2xx (or G2xx), not a G96.
I am aware of that. But OTOH there already was 'some of the MUL' exposed on the 8500GT. So why shouldn't other new chips get some kind of improvement they can easily fit into their die-space budget?
 