NVIDIA: Beyond G80...

All this news about Vista and we don't even have a good XP driver yet. UT2k4 is still completely unplayable on 80% of maps, for example.
 
I'm not convinced the 1:2 TA:TF ratio is a good idea: the second set of TFs seems to be idle too much of the time.

If the transistor budget for G84 allows a 1:1 ratio, why not for a future high-end GPU? I can't help thinking that the ratio in G80 was a victim of transistor budget at 90nm.
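As a back-of-the-envelope check on the idle-TF claim, here's a toy utilization model. Everything in it is hypothetical (the unit counts, the one-address-per-TA-per-clock issue rate, and the one-vs-two TF-cycle fetch costs); NVIDIA hasn't published the real scheduling rules, so treat this as a sketch of the argument, not the hardware:

```python
def tf_utilization(ta_units, tf_units, frac_two_cycle):
    """Estimate texture-filter utilization for a given TA:TF ratio.

    frac_two_cycle: fraction of fetches costing two TF-cycles
    (e.g. FP16 or trilinear); the rest cost one TF-cycle.
    Assumes each TA issues one fetch per clock and is never starved.
    Purely illustrative -- real G80 scheduling is not public.
    """
    # TF-cycles of work generated per clock by the address units
    work_per_clock = ta_units * ((1 - frac_two_cycle) + 2 * frac_two_cycle)
    # Utilization caps at 1.0 (all TFs busy every clock)
    return min(work_per_clock / tf_units, 1.0)

# 1:2 TA:TF with pure bilinear fetches: half the TFs sit idle
print(tf_utilization(ta_units=4, tf_units=8, frac_two_cycle=0.0))  # 0.5
# With all two-cycle fetches the extra filters are fully used
print(tf_utilization(ta_units=4, tf_units=8, frac_two_cycle=1.0))  # 1.0
```

In this model the second set of TFs only earns its transistors when the workload leans heavily on two-cycle fetches, which matches the "idle too much of the time" observation for ordinary bilinear-heavy content.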

Jawed
 
All this news about Vista and we don't even have a good XP driver yet. UT2k4 is still completely unplayable on 80% of maps, for example.

Strange, because I was playing UT2k4 with my 8800 GTX when it was released, and it was all working perfectly. That was on XP, as there were no DX10 drivers at the time.

Up to now, the only games I've had problems with are NFS: Carbon and STALKER on Vista x64.
 
Strange, because I was playing UT2k4 with my 8800 GTX when it was released, and it was all working perfectly. That was on XP, as there were no DX10 drivers at the time.

Up to now, the only games I've had problems with are NFS: Carbon and STALKER on Vista x64.

I second that on UT2004. It works completely flawlessly for me on a GTS with Vista32. Max everything (including CP options) at a rock-solid 75fps with vsync. That's at 1280x1024.

On Vista32 I have only had a problem with TOCA Race Driver 2 so far. Everything else has worked perfectly.
 
I fail to see how that has anything to do with CUDA... What matters there is the ALU-TEX ratio more than anything, and since that'd increase the number of transistors allocated to TEX, it'd be more of a negative than a positive...

I'm not a chip architect or anything, but I could imagine that just adding address units to the already-present filtering units could be fairly cheap in terms of transistor count.

And you'd double the possible data input your ALUs can get out of textures, which I believe are used in some GPGPU apps as data containers. Also, I'm not sure whether an FP16 fetch (requiring 2 cycles) blocks the address unit until it completes. If that's the case, scientific workloads needing very precise data should profit a lot from the additional input.
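To put the blocking question in numbers, here's a small sketch comparing all-FP16 fetch throughput under both assumptions. The function name, unit counts, and the blocking behaviour itself are all hypothetical, since the actual G80 texture-unit scheduling isn't documented:

```python
def fp16_fetch_rate(addr_units, filter_units, blocks_addr=True):
    """Fetches per clock for an all-FP16 stream (2 TF-cycles each).

    If blocks_addr is True, the address unit is assumed to stall
    until filtering completes, so each fetch occupies its TA for
    2 cycles. Hypothetical model -- real behaviour is not public.
    """
    addr_rate = addr_units / 2.0 if blocks_addr else float(addr_units)
    filter_rate = filter_units / 2.0  # 2 TF-cycles per FP16 fetch
    return min(addr_rate, filter_rate)

# 1:2 TA:TF (4 TAs, 8 TFs) with blocking FP16 fetches
print(fp16_fetch_rate(4, 8, blocks_addr=True))   # 2.0 fetches/clock
# Doubling the address units (1:1) recovers full FP16 throughput
print(fp16_fetch_rate(8, 8, blocks_addr=True))   # 4.0 fetches/clock
```

If the blocking assumption holds, the cheap extra address units would double FP16 fetch throughput, which is exactly the case the post speculates about for precision-hungry GPGPU work.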
 
I second that on UT2004. It works completely flawlessly for me on a GTS with Vista32. Max everything (including CP options) at a rock-solid 75fps with vsync. That's at 1280x1024.

On Vista32 I have only had a problem with TOCA Race Driver 2 so far. Everything else has worked perfectly.

Third it for UT04... it was one of the first games I fired up (at 1600x1200, 8xAA) when I got my GTX. Maybe there are some custom maps that are having problems? I've played on 60+ maps with no issues. Does a newer driver break something?
 
I second that on UT2004. It works completely flawlessly for me on a GTS with Vista32. Max everything (including CP options) at a rock-solid 75fps with vsync. That's at 1280x1024.


3rd that on 64-bit Vista; played it last night, no problems here. 16x12 maxed out, didn't check frame rates, but no lag whatsoever.
 
The problem only exists on XP, and it's a well-known issue. The stuttering is so bad it's like you're playing at 5 fps the majority of the time. A few people don't have the issue, but many people do. When you combine this with the netcode, the playability goes right out the window.
 
AMD has nothing to compete with G80, but they (and, ironically enough, Nvidia themselves) have plenty to compete with G84. The reason G80 was a success had a lot less to do with DX10 than with its utterly overwhelming performance advantage. It was a no-compromise product, if you could afford it. G84, not so much.

What I meant was that both G80 and G84 had no DirectX10 competition when they launched, so vendors were more than happy to keep margins high. No doubt, I expect G84 prices to come down quickly because of the price-to-performance ratio, whereas G80 prices have held up because, after all these months, it is still clearly the highest-performance card, with full DirectX10 support and no competition in these arenas.

I still can't help but think that there must be something placed in between the 8600 GTS and 8800 GTS, namely an 8700 GTS with 64 stream processors, a 256-bit bus, and 256/512MB RAM.

As for a refresh of G80, one can't help but take a look at G84, which in some cases has higher efficiency than G80. This makes me believe that a G80 refresh will involve not only higher core/shader/memory clocks than G80, but higher efficiency in some areas as well. With constantly improving drivers, it should offer a pretty significant boost in performance compared to the initial 8800 GTX reviews. I also expect it to come on an 80nm process and to incorporate the enhanced video processing capabilities found on the 8600 cards.
 
This picture of the 9800 was posted over at NvNews. Clever Photoshop work :?:


[Image: geforce9800ed3.jpg]
 
Has no one noticed that the 8800 Ultra specs have been unveiled on Dell's website? 650MHz core, 2.16GHz memory, 384-bit bus, 128 stream processors, 768MB GDDR3 RAM.

Core and memory speeds are more in line with the 8600 GTS than the 8800 GTX. If this is on 80nm with some of the efficiency improvements found in G84 and the new video processing engine, then it might be a decent improvement over the 8800 GTX. Given that the name is still 8800, I never expected it to be a very radical change from the 8800 GTX.
 
Hopefully this is the 256-bit, 64-SP one. For that I would pay even present 8600 GTS prices! :D

The mere fact that it is the 8800 GS makes me feel that this would be 64 SPs (and a 256-bit bus), as opposed to something lower like 48 SPs. That would be effectively half the SPs of the 8800 GTX, which seems quite plausible for a lower-end 8800-series card.
 