GeForce FX question

Well in those days, everything was 64-bit until nvidia brought out TNT with 128-bit. S3 (and others?) battled on with 64-bit and were reasonably close in performance (but still losers, of course) in that generation because Savage3D and Savage4 were very efficient cores. To compete everyone had to move to 128-bit despite the extra cost involved.

I imagine the same thing will happen here. I don't think there's such a thing as too much bandwidth.
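
For what it's worth, the width part is just arithmetic: peak bandwidth is the bus width in bytes times the number of transfers per second. A rough sketch in Python (the clocks below are ballpark illustrative numbers, not any particular board's specs):

Code:
# Rough sketch: peak memory bandwidth from bus width and memory clock.
# The clock figures here are illustrative, not real board specs.
def peak_bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    """Bytes per transfer * transfers per second, in GB/s."""
    return (bus_bits / 8) * clock_mhz * 1e6 * transfers_per_clock / 1e9

# At the same clock, doubling the bus width doubles peak bandwidth:
print(peak_bandwidth_gb_s(64, 100))   # ~0.8 GB/s (64-bit SDR @ 100 MHz)
print(peak_bandwidth_gb_s(128, 100))  # ~1.6 GB/s (128-bit SDR @ 100 MHz)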
 
Dio said:
Well in those days, everything was 64-bit until nvidia brought out TNT with 128-bit. S3 (and others?) battled on with 64-bit and were reasonably close in performance (but still losers, of course) in that generation because Savage3D and Savage4 were very efficient cores. To compete everyone had to move to 128-bit despite the extra cost involved.

I imagine the same thing will happen here. I don't think there's such a thing as too much bandwidth.

Thanks!

So in other words, there is in fact historical precedent. A 256-bit bus is the way of the future, for sure.
 
Well in those days, everything was 64-bit until nvidia brought out TNT with 128-bit. S3 (and others?) battled on with 64-bit and were reasonably close in performance (but still losers, of course)...

Actually, most vendors migrated to 128 bits at roughly the same time.

The Nvidia Riva 128 had a 128-bit bus (the TNT was not their first); the 3dfx Banshee was 128-bit, as were the ATI Rage 128 and the #9 Imagine 128....

The 3dfx Voodoo Graphics was also 128-bit, though because it was a dual-chip design, it's not really apples to apples...
 
I agree. nVidia were the first, and I'm not sure about the G200. I didn't know the Riva 128 had 128-bit memory; I thought that was marchitecture.
 
The delay of the nv30 illustrates better than any theory ever could the glaring holes in nVidia's technology roadmap at present:

(1) They are overly dependent on the technology roll-out of the fab guys--nVidia was hoping for low-k dielectrics, among other things, to provide its clock speed and ultimate performance.

Contrast: ATI used its in-house engineering resources along with a .15 micron process to do what nVidia flatly stated it could not do at .15 microns.

(2) They are overly dependent on bandwidth performance as designed by outside RAM manufacturers.

Contrast: ATI used its in-house engineering to build in a 256-bit bus for the 9700P-R300 series of products, thus cutting the company loose from the same sort of dependencies that have hobbled nVidia in the bandwidth department.

Unsurprising Conclusion: R300 products have been shipping for five months, and nVidia's DX9-class chips are shipping... when?

Despite what is now past history with DX8 parts and earlier, in the move to the DX9 G/VPU, ATI has demonstrated a marked superiority in both design and execution. At this point it is as senseless to ask if ATI will keep it up as it is to ask if nVidia can catch up. The next six months or so should tell the tale.
 
Also, do not forget that Carmack mentioned multi-chip solutions entering the consumer market again - probably even this year. And I don't want to sound like a Carmack whore, but I think his "inside info" might have some weight. Remember him demanding more per-pixel precision (64-bit at least)? When he said that, all of us laughed, and now here we have floating-point graphics chips.
Well, there is another explanation: what if today's big and complex chips (100+ million transistors) will be divided into more, smaller units, like separate geometry and F/X processors? (Yes, we've seen that in the past, too.)
 
quattro said:
Also, do not forget that Carmack mentioned multi-chip solutions entering the consumer market again

No, he didn't say that they would be entering the consumer market again. He just said that we might see some. They will probably not be for the consumer market.
 
Contrast: ATI used its in-house engineering resources along with a .15 micron process to do what nVidia flatly stated it could not do at .15 microns.

I don't think that Nvidia has ever stated that they couldn't do a DX9 part using a 0.15 micron process, only that they couldn't make the NV30 on that process. And that's probably correct, since they're aiming for 500 MHz.
 
And now they need a 12-layer PCB as well. What are they up to? Which is producing the most signal noise, the NV30 core or the DDR-II?
 
BoardBonobo said:
And now they need a 12-layer PCB as well. What are they up to? Which is producing the most signal noise, the NV30 core or the DDR-II?
One article I read said it seems that the higher frequency actually causes more problems than a wider bus (256-bit) would - hence the uber-complex board design.
IF true, that certainly shoots someone's little theories down in flames.
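
For what it's worth, if the rumoured figures are in the right ballpark (128-bit DDR-II at around 500 MHz for NV30 versus 256-bit DDR at around 310 MHz for R300 - treat those as assumptions, not confirmed specs), the raw bandwidth numbers land in the same neighbourhood; the narrow bus just buys its bandwidth back with clock, which is exactly where the signalling headaches come from:

Code:
# Hypothetical comparison; the clocks below are rumoured figures, not confirmed specs.
# DDR/DDR-II transfer data twice per memory clock.
def peak_gb_s(bus_bits, clock_mhz, transfers_per_clock=2):
    return (bus_bits / 8) * clock_mhz * 1e6 * transfers_per_clock / 1e9

narrow_fast = peak_gb_s(128, 500)  # ~16.0 GB/s (128-bit DDR-II @ ~500 MHz)
wide_slow   = peak_gb_s(256, 310)  # ~19.8 GB/s (256-bit DDR @ ~310 MHz)
print(narrow_fast, wide_slow)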
 
Unless they figure out some serious means of compression, they are going to have to move to more bits or a higher clock rate to get more performance.

A wider bus means more skew.

A higher clock means more signal reflection and such...

Definitely hard to overcome both, but it is doable.
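
On the compression point: lossless colour/Z compression effectively multiplies the bandwidth you already have, so it is a third lever besides width and clock. A toy illustration (both numbers are assumptions picked for the example, not measured data):

Code:
# Toy illustration only; the raw figure and the 2:1 ratio are assumed, not measured.
raw_gb_s = 16.0    # assumed raw peak bandwidth of the bus
avg_ratio = 2.0    # assumed average lossless compression ratio on framebuffer traffic

effective_gb_s = raw_gb_s * avg_ratio
print(effective_gb_s)  # 32.0 -> same effect as doubling the bus width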
 