The NVIDIA future architecture thread (G100/GT300 and such)

Wow, that's a really nice set of data.

The ring bus patent document:

http://forum.beyond3d.com/showpost.php?p=1165766&postcount=1947

says that in an embodiment the ring bus runs at core clock, not memory clock, so perhaps that's it: 500MHz is enough for 76.8GB/s, and it seems 750MHz (50% faster) is enough for 115.2GB/s (50% more) :p

At 1206MHz memory, RV670's performance is 55% of R600's. That rises to 64% with RV670 at 297MHz memory versus R600 at 153MHz. A clear demonstration of the narrower bus being more efficient?
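A quick sanity check on that arithmetic (a rough sketch in Python; the 256-bit/512-bit bus widths and DDR signalling are the usual public RV670/R600 figures, not anything from the patent):

```python
# Rough bandwidth arithmetic for the claims above.
# Assumes DDR signalling (2 transfers per clock); the bus widths are the
# commonly cited external figures for RV670 (256-bit) and R600 (512-bit).

def bandwidth_gb_s(bus_bits: int, mem_clock_mhz: float, transfers_per_clock: int = 2) -> float:
    """External memory bandwidth in GB/s."""
    return bus_bits * mem_clock_mhz * transfers_per_clock / 8 / 1000

# Ring clock scaling: 50% more core clock servicing 50% more bandwidth.
print(76.8 * 1.5)                # 115.2 -- matches 500MHz -> 750MHz

# The 297MHz-vs-153MHz comparison is close to an equal-bandwidth one:
print(bandwidth_gb_s(256, 297))  # ~19.0 GB/s for RV670
print(bandwidth_gb_s(512, 153))  # ~19.6 GB/s for R600
```

So, if I'm reading the test setup right, the 297/153 data point compares the two chips at almost identical total bandwidth.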

Jawed
 
G100 is GT200.

Depends on whether the codename represents a technology generation or a specific timeframe for release. If it's the former, then GT200 is the "G200" found on older roadmaps; if it's the latter, then it is truly "G100".

On quite old roadmaps, "G100" used to stand for a D3D11 chip, just as on ATI's old roadmaps that spot was occupied by the "R700".

Lord knows how old those roadmaps were, and I doubt either IHV could have known at that stage what manufacturing process delays might have occurred or what D3D11 was going to look like after all. It wasn't too long ago that ATI folks were "certain" the Xenos/R6x0 tessellator would be sufficient for D3D11 compliance. The recently revealed slides about D3D11 point in a different direction, though.

Sorry for the OT.
 
Depends on whether the codename represents a technology generation or a specific timeframe for release. If it's the former, then GT200 is the "G200" found on older roadmaps; if it's the latter, then it is truly "G100".
I heard that story (from you =)) but the truth is that GT200 has the G100 ID in the current drivers, and always had the G100 ID. So it's G100 even if it's G200 and GT200 at the same time. NV's codenames are a mess right now.
 
:LOL: I was thinking that they are merely meant to confuse outsiders.
I'm not sure that's the case. No one seriously judges future chips by their codenames.
It's probably another "end of available digits" thing.
When they made the NV4x->G7x switch they were hitting the limits of NV4x (you can't fit more than 10 chips in the NV4x line, and there were more than 10 chips in NV4x and G7x combined).
Now they're trying to avoid that situation in the future: you can have a hundred chips under a GXYZ label, where X is the architecture generation, Y the current line-up, and Z a chip's position in the line (a toy decoder below illustrates the idea).
Plus some marketing of course, since the new market branding will probably be somewhat closer to the codenames (GT200 -> GTX200, GT300 -> GTX300? etc)
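As a toy illustration of that proposed scheme (the field meanings here are this thread's speculation, not anything NVIDIA has published):

```python
# Toy decoder for the speculated GXYZ codename scheme described above.
# X = architecture generation, Y = line-up, Z = position in the line.
# Purely illustrative; the scheme itself is forum speculation.

def decode_codename(name: str) -> dict:
    digits = name.upper().lstrip("GT")  # accept both "G100" and "GT200" spellings
    if len(digits) != 3 or not digits.isdigit():
        raise ValueError(f"not a GXYZ-style codename: {name}")
    return {
        "generation": int(digits[0]),
        "lineup": int(digits[1]),
        "position": int(digits[2]),
    }

print(decode_codename("GT200"))  # {'generation': 2, 'lineup': 0, 'position': 0}
print(decode_codename("GT206"))  # {'generation': 2, 'lineup': 0, 'position': 6}
```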
 
I heard that story (from you =)) but the truth is that GT200 has the G100 ID in the current drivers, and always had the G100 ID. So it's G100 even if it's G200 and GT200 at the same time. NV's codenames are a mess right now.

Just because their codenames are a mess (and always have been, for that matter) doesn't mean that years ago they hadn't planned something then named "G100" for D3D11. Roadmaps change on many levels, and you know very well that beyond a year out they're merely optimistic estimates and nothing more. In any case (and that's the last OT on the matter), the story with the G8x/G2x/G1x timeline came years ago from an ATI employee.

Not that it really matters, but RV770 could easily have an "R700" driver code tag too, while the real "R700" would rather carry an official RV870 codename.
 
What for?

pinky.jpg


To take over the world, CarstenS, as they do every product cycle...
 
I'm not really sure it's physically possible to have a 512-bit bus in a 40nm GT2xx GPU with 384 SPs. Probably just another more or less baseless rumour (AFAIK the original G100 project was supposed to have 384 SPs on 55nm, so there may be some grounds to all these 384 SP rumours).
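For anyone wondering why a 512-bit bus at 40nm might be physically questionable, the usual argument is pad limiting: the pins have to fit around the (now much smaller) die. A back-of-the-envelope sketch, where every number is an illustrative guess rather than real TSMC or NVIDIA data:

```python
import math

# Back-of-the-envelope pad-limit check for a 512-bit GDDR interface.
# ALL numbers below are illustrative assumptions, not real process/package data.

DIE_AREA_MM2 = 280.0   # hypothetical 40nm GT2xx die (GT200 is ~576mm2 at 65nm)
PAD_PITCH_MM = 0.045   # assumed effective I/O pad pitch along the perimeter
PADS_PER_DQ  = 2.2     # assumed pads per data bit, incl. strobes/power/ground

def perimeter_pads(die_area_mm2: float) -> int:
    """How many pads fit on the perimeter of a square die at the assumed pitch."""
    side = math.sqrt(die_area_mm2)
    return int(4 * side / PAD_PITCH_MM)

needed    = int(512 * PADS_PER_DQ)        # ~1126 pads for the memory interface alone
available = perimeter_pads(DIE_AREA_MM2)  # ~1487 pads, shared with PCIe, power, display
print(needed, available)
```

On those made-up numbers it gets marginal once PCIe, display outputs and core power take their share of the pad ring, which is why a big shrink tends to put a 512-bit bus in doubt.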
 
Going back to something CarstenS said in the first post: is 16x aniso really free on the GTX 280?
I've only been using 8x.
 
Going back to something CarstenS said in the first post: is 16x aniso really free on the GTX 280?
I've only been using 8x.

How is something like that free? But then again, even on my 8800 Ultras I rarely notice any sort of performance drop at that setting. I have played all my games at that setting, including Crysis and Warhead.
 
How is something like that free?

Maybe not absolutely 100% free, but as you note the performance hit is negligible on G80/G92-based hardware. The question is whether that holds true with GT200: the ALU:TEX ratio has increased (3:1 vs 2:1, although G80 is 1:2 address:filtering), so perhaps AF isn't as 'free' as it used to be (see the unit counts sketched below).

But then again, even on my 8800 Ultras I rarely notice any sort of performance drop at that setting. I have played all my games at that setting, including Crysis and Warhead.

The same was true for my old 8800GT, but do keep in mind AF doesn't work properly in Crysis if you play at Very High settings.
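For reference, the ratios being thrown around fall straight out of the public unit counts. A quick sketch (the counts are the widely reported figures for each chip; treat them as an assumption):

```python
# ALU:TEX ratio arithmetic behind the "AF isn't as free anymore" argument.
# Unit counts are the commonly cited public figures for each chip.

chips = {
    #          SPs  TA (address)  TF (filter)
    "G80":   (128,  32,           64),
    "G92":   (128,  64,           64),
    "GT200": (240,  80,           80),
}

for name, (sps, ta, tf) in chips.items():
    print(f"{name}: ALU:filter = {sps / tf:.0f}:1, address:filter = 1:{tf // ta}")
```

That gives G80 a 2:1 ALU:filter ratio (with 1:2 address:filter) against GT200's 3:1, which is the shift being discussed.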
 
Going back to something CarstenS said in the first post: is 16x aniso really free on the GTX 280?
I've only been using 8x.

It also depends on what exactly one means by "free". AF normally doesn't tax bandwidth much, but rather fillrate. If you're using the default "quality" AF, the "brilinear" it uses comes virtually for free over plain bilinear.

Here's an AF only test from Computerbase: http://www.computerbase.de/artikel/..._gtx_280_sli/7/#abschnitt_aa_und_afskalierung

In the worst case the GTX 280 loses up to 18% in performance going from 1xAF to high-quality 16xAF, and you can also see how insignificant the difference between 8xAF and 16xAF is in those three applications.
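A crude way to see why 8x and 16x barely differ: the hardware only takes as many samples as a pixel's actual anisotropy requires, and pixels needing more than 8:1 anisotropy are rare in typical scenes. A toy cost model (the anisotropy distribution below is invented purely for illustration):

```python
# Toy model of AF cost: the per-pixel sample count is capped by both the AF
# setting and the pixel's actual anisotropy ratio. The distribution below
# is invented for illustration, not measured from any real game.

scene = {1: 0.55, 2: 0.25, 4: 0.12, 8: 0.06, 16: 0.02}  # ratio -> share of pixels

def avg_samples(af_setting: int) -> float:
    """Average bilinear samples per texture lookup at a given AF setting."""
    return sum(share * min(ratio, af_setting) for ratio, share in scene.items())

for af in (1, 8, 16):
    print(f"{af}xAF: {avg_samples(af):.2f} samples/lookup")
# 1xAF: 1.00, 8xAF: 2.17, 16xAF: 2.33 -- going from 8x to 16x only adds
# samples for the few pixels above 8:1 anisotropy.
```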
 
The same was true for my old 8800GT, but do keep in mind AF doesn't work properly in Crysis if you play at Very High settings.

Oh snap, I didn't know that! But then again I have been tweaking my cvars a lot, so how can you tell what exactly counts as Very High settings? Anyway, I'm not touching that game again until I get a pair of NVIDIA's next-gen video cards :)
 