NVIDIA: Beyond G80...

In the games where the stock Ultra is slower than the overclocked GTX, it seems that the higher core clock of the OC'd GTX (625MHz vs 612MHz for the stock Ultra) is playing a stronger role than the stock Ultra's higher shader domain clock. Presumably, if the Ultra were run at a core clock of 625MHz or higher, it would be equal or ahead in all cases.
 
Well, I guess that's presuming that overclocking the GTX to 625MHz didn't take the shaders up with it too. If it did, then it would be a case where the extra bandwidth on the Ultra isn't compensating for that difference. The Ultra will probably embarrass even the overclocked GTXs if it can really attain 700+MHz on the core, though - that remains to be seen.
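Just to put rough numbers on that (a back-of-the-envelope sketch, assuming the stock 575/1350MHz GTX and 612/1512MHz Ultra clocks, and assuming the shader domain scales linearly with the core when overclocking, which isn't guaranteed):

```python
gtx_core, gtx_shader = 575, 1350      # MHz, stock 8800 GTX
ultra_core, ultra_shader = 612, 1512  # MHz, stock 8800 Ultra
oc_core = 625                         # MHz, the overclocked GTX in question

# Assumption: the shader domain scales linearly with the core clock.
oc_shader = gtx_shader * oc_core / gtx_core
print(f"OC'd GTX shader domain: ~{oc_shader:.0f}MHz vs the Ultra's {ultra_shader}MHz")
# ~1467MHz, still short of the Ultra's 1512MHz, so the Ultra keeps a shader
# clock edge even if the GTX's shaders came up with the core.
```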
 

I read somewhere that the shader clock domain was tied to the memory bus in some way, rather than the main clock.
 
Yeah, that sounds right. The core clock and the memory clock are tied together as these frequencies are adjusted. I can only speculate that the reason they did that was to keep the architecture balanced in terms of core clock speed vs available memory bandwidth.

So on the G80, it's easy to see why NVIDIA would feel no pressing need to go with a 512-bit memory bus, since the core clock frequency would have to be much higher to take good advantage of that extra bandwidth while still maintaining a balanced architecture.
 
Yup. And it is conceivable that they could simply go for faster memory before going for a wider bus. It wouldn't surprise me if we don't see them go for a 512-bit bus for a little while yet.
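For what it's worth, the bandwidth math backs that up (a quick sketch using the stock 900MHz GTX and 1080MHz Ultra memory clocks; the 512-bit case is purely hypothetical):

```python
def bandwidth_gbs(bus_width_bits, effective_mhz):
    # bytes per transfer * transfers per second, expressed in GB/s
    return bus_width_bits / 8 * effective_mhz / 1000

print(bandwidth_gbs(384, 1800))  # 8800 GTX:   384-bit @ 900MHz (1800 effective)  ~ 86.4 GB/s
print(bandwidth_gbs(384, 2160))  # 8800 Ultra: 384-bit @ 1080MHz (2160 effective) ~ 103.7 GB/s
print(bandwidth_gbs(512, 1800))  # hypothetical 512-bit bus @ 900MHz              ~ 115.2 GB/s
```

So faster memory alone already buys the Ultra roughly 20% more bandwidth on the same 384-bit bus, without the wider PCB and extra memory chips a 512-bit design would need.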
 
Water-cooled GTX @ 630/1030MHz for 599 EUR on alternate.de, so I still think the Ultra retail price will be about 600-650 EUR/$ in 2-3 weeks.
 
Yup, and technically the Ultra is not really "Beyond G80" anyway, so off to the other thread :)

Well, if you recall the early theories on the matter, that wasn't the consensus at the time. So it really wasn't off-topic until the Ultra was proven to be just a souped-up G80... but now that it has been proven... etc.
 
Guys,

When, in your opinion, will a GPU consist of two separate chips:
a 'graphics chip' (responsible for 'visuals')
a 'CPU' chip?

With the GPGPU concept taking off, you would think it will make sense at some point. Maybe we will see that approach in G100?
 

That'd be called Fusion, and you'd have to look on the other side of the fence for a timeframe shorter than G100.
 