NVIDIA: Beyond G80...

Are there any rumors of an 8800 based on an 80nm process with GDDR4, hopefully this summer?

I'd think the Ultra gives a pretty big tip on that question. If they were close with 80nm, they wouldn't have given us the Ultra they did, and anything more than a few months out at this point seems much more likely to be 65nm territory.
 
I've found this on the German forum 3DCenter... It seems like a cut from an interview...

"
http://developer.download.nvidia.com...in32_linux.zip

>Q: Does CUDA support Double Precision Floating Point arithmetic?

>A: CUDA supports the C "double" data type. However on G80
> (e.g. GeForce 8800) GPUs, these types will get demoted to 32-bit
> floats. NVIDIA GPUs supporting double precision in hardware will
> become available in late 2007.
"

Maybe it's about the G9x series... Can anybody explain what exactly it means? Does it raise performance or IQ? And the GF8 doesn't support this right now? And what exactly is double precision floating point arithmetic?

THX a lot for all answers :)
 
Double precision arithmetic is 64-bit floating point. I don't see how this would be beneficial at all for 3D graphics, but it would be highly useful for GPGPU applications. It may be useful, for instance, for computing gameplay physics on the card. But it would be vastly more useful for offline computation.
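To make the difference concrete: fp32 carries a 24-bit mantissa (roughly 7 decimal digits) while fp64 carries 53 bits (roughly 16), so long accumulations are where fp32 gives out first. A minimal host-side C sketch, nothing GPU-specific, just illustrating the two formats:

    #include <stdio.h>

    int main(void) {
        float  f = 16777216.0f; /* 2^24: the last integer fp32 counts exactly */
        double d = 16777216.0;
        f += 1.0f;              /* rounds back to 16777216.0f; the +1 is lost */
        d += 1.0;               /* fp64 keeps it: 16777217.0 */
        printf("float:  %.1f\n", f);
        printf("double: %.1f\n", d);
        return 0;
    }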
 
Double precision arithmetic is 64-bit floating point. I don't see how this would be beneficial at all for 3D graphics, but it would be highly useful for GPGPU applications. It may be useful, for instance, for computing gameplay physics on the card. But it would be vastly more useful for offline computation.

How about offline rendering on the GPU? :cool:
 
Double precision arithmetic is 64-bit floating point. I don't see how this would be beneficial at all for 3D graphics
Summed-area tables :) (sketched below). And remember, people said the same things about fp32 filtering and MSAA, but there are already good uses for those.

... but it would be highly useful for GPGPU applications. It may be useful, for instance, for computing gameplay physics on the card. But it would be vastly more useful for offline computation.
I do agree with you that it is a lot more useful in the offline world, and I wouldn't be at all surprised if it's a Quadro/CUDA-only feature (unfortunately), even though HLSL actually has a double type, making it possible to support it for graphics (although IIRC there are no double storage types...).
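On the summed-area-table point: every SAT entry is a running sum over everything above and to the left of it, so entries reach width x height x texel value while fp32 keeps only ~24 bits of mantissa. A 1-D host-side sketch of the drift; the 2048x2048 size and 0.1 texel value are just illustrative assumptions:

    #include <stdio.h>

    int main(void) {
        const long n = 2048L * 2048L;  /* texel count of a 2048x2048 table */
        float  sumF = 0.0f;
        double sumD = 0.0;
        for (long i = 0; i < n; ++i) {
            sumF += 0.1f;  /* 0.1 has no exact binary representation */
            sumD += 0.1;
        }
        /* fp64 lands near the true 419430.4; fp32 drifts visibly off it, and
           a box filter built from differences of such sums inherits the error. */
        printf("fp32: %.1f\nfp64: %.1f\n", sumF, sumD);
        return 0;
    }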
 
I expect G90 to have as many ALUs as G80. Higher yields, higher frequency and, more importantly, lower power should be nice enough features.
 
It seems video card performance, in this generation at least, is influenced largely by memory speed. From this I suspect we will see an eager move to GDDR5, and likely wider than 1024-bit access. Much higher RAM bandwidth will likely give us the ability to have more heavily normal-mapped games.
 
I expect G90 to have as many ALUs as G80. Higher yields, higher frequency and, more importantly, lower power should be nice enough features.

Yeah, I think so too; there had to be some reason they invested in high shader clocks. My conservative guess is a 750MHz core and a 1.8GHz shader clock.
 
It seems video card performance, in this generation at least, is influenced largely by memory speed. From this I suspect we will see an eager move to GDDR5, and likely wider than 1024-bit access. Much higher RAM bandwidth will likely give us the ability to have more heavily normal-mapped games.

I don't think normal mapping is much of a problem for current hardware/consoles ;)
 
I expect G90 to have as many ALUs as G80. higher yields, higher frequency and more importantly lower power should be nice enough features.

Well, I think G90 will be to G80 as the GF7800 was to the GF6800... more and improved shader processors (192 is very likely), a 512-bit MC, much higher frequencies (about 700-800MHz), 65nm, and other minor architectural improvements... IMO there will be about a 50% performance bump compared to G80... I wonder if there will be DX10.1 support too...

I hope NVIDIA will launch it in October/November this year....
 
Well, I think G90 will be to G80 as the GF7800 was to the GF6800... more and improved shader processors (192 is very likely), a 512-bit MC, much higher frequencies (about 700-800MHz), 65nm, and other minor architectural improvements... IMO there will be about a 50% performance bump compared to G80... I wonder if there will be DX10.1 support too...

I hope NVIDIA will launch it in October/November this year....

If this G80 to G90 transition is to be anything like the NV40/NV45 to G70 one, then the performance improvement will have to be closer to 90~100%.
The 7800 GTX (256MB) was in fact faster than two SLI'd 6800 Ultras.
 
If this G80 to G90 transition is to be anything like the NV40/NV45 to G70 one, then the performance improvement will have to be closer to 90~100%.
The 7800 GTX (256MB) was in fact faster than two SLI'd 6800 Ultras.

Well that could be quite a refresh.

90nm --> 65nm
575MHz --> 700MHz +/- 50MHz?
128 SP --> 192 SP
384bit --> 512bit? (not sure about this, but I think it's a must since ATi has the 512-bit advantage, regardless of how beneficial such a wide and expensive bus really is in real-life performance)
768MB --> 1024MB?
32 TMUs --> ?
24 ROPs --> ?
Probably the improved video engine that the G86/G84 have would be nice too.

Assuming other tweaks and changes for more efficiency, it would be a pretty big jump from G80 in terms of performance, I guess, unless they're going for a G71 type of refresh, where only minor performance increases are found (e.g. clock speeds) while decreasing the die size, lowering power consumption, and running the GPU cooler.
 
Well that could be quite a refresh.

90nm --> 65nm
575MHz --> 700MHz +/- 50MHz?
128 SP --> 192 SP
384bit --> 512bit? (not sure about this, but I think it's a must since ATi has the 512-bit advantage, regardless of how beneficial such a wide and expensive bus really is in real-life performance)
768MB --> 1024MB?
32 TMUs --> ?
24 ROPs --> ?
Probably the improved video engine that the G86/G84 have would be nice too.

Assuming other tweaks and changes for more efficiency, it would be a pretty big jump from G80 in terms of performance, I guess, unless they're going for a G71 type of refresh, where only minor performance increases are found (e.g. clock speeds) while decreasing the die size, lowering power consumption, and running the GPU cooler.

What keeps them from going after the middle ground (a 448-bit bus and the 80nm half-node process)?
If 320-bit and 384-bit already sound weird, why not 448-bit?
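For what it's worth, the bandwidth arithmetic behind these widths is just bytes per transfer times effective data rate. A quick C sketch, taking the 8800 GTX's 1800MT/s GDDR3 as the baseline (the wider buses are hypothetical what-ifs, not announced parts):

    #include <stdio.h>

    int main(void) {
        double rate_gtps = 1.8;                 /* 1800 MT/s GDDR3, 8800 GTX */
        int widths[] = { 320, 384, 448, 512 };  /* bus widths in bits */
        for (int i = 0; i < 4; ++i)
            printf("%3d-bit @ %.1f GT/s -> %5.1f GB/s\n",
                   widths[i], rate_gtps, widths[i] / 8.0 * rate_gtps);
        return 0;
    }

So 448-bit really is the middle ground: about 100.8 GB/s versus the GTX's 86.4 GB/s at the same memory clock, roughly a 17% bump.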
 
What keeps them from going after the middle ground (a 448-bit bus and the 80nm half-node process)?
If 320-bit and 384-bit already sound weird, why not 448-bit?


How much die space do they need for full 64-bit support? It was kinda hinted at here, unless the 80nm transition is just for an upclocked G80.
 
I'm sure the Gelato guys can't wait for that... :)
I don't remember the exact wording of the question, but a couple of years back at SIGGRAPH I asked Larry Gritz if 64-bit precision was desired for Gelato, and he said no. Of course he might change his mind, but it wasn't one of the features at the top of his wish list.
 
Bandwidth in general isn't that much of a problem

Fair enough. But if the 8800 can already process one polygon per cycle, including bump-mapped textures, what is the point in increasing the frequency over 575MHz, besides more than 16x AA/AF or 128-bit color?
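One answer to the frequency question: fixed-function rates scale linearly with core clock. A rough C sketch, assuming one polygon per clock as the post does and the 8800 GTX's 24 ROPs; the 700MHz figure is just the rumored refresh clock from earlier in the thread:

    #include <stdio.h>

    int main(void) {
        const int rops = 24;                    /* 8800 GTX ROP count */
        double clocks_mhz[] = { 575.0, 700.0 }; /* current vs. rumored core */
        for (int i = 0; i < 2; ++i)
            printf("%.0f MHz: %.3f Gtri/s setup, %.1f Gpix/s fill\n",
                   clocks_mhz[i],
                   clocks_mhz[i] / 1000.0,         /* 1 triangle per clock */
                   rops * clocks_mhz[i] / 1000.0); /* 1 pixel per ROP per clock */
        return 0;
    }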
 