NVIDIA: Beyond G80...

As for the codename - I'd suspect if it's a 'true' optical shrink (same metal masks), then NVIDIA would seriously consider keeping the same chip codename. We'll see.

So you're expecting a shrink? What about just a souped-up 90nm G80 à la the 7800GTX-512?
 
So you're expecting a shrink? What about just a souped-up 90nm G80 à la the 7800GTX-512?
In this case, an optical shrink would probably be pin-compatible and reduce costs for the entire GeForce 8800 line-up. So it does make sense to me, but I'll openly admit that there haven't been many reports out there confirming that NVIDIA is actually doing that. Obviously, a cherry-picked SKU à la the 7800GTX-512 is also a possibility.
 
I think G80 is entirely fillrate limited. I think NVIDIA and ATI are both focusing too much on shader performance, because it still seems to be fillrate that is holding us back.

I remember reading an article where they overclocked separate parts of G80, and it was always overclocking the core that increased performance the most; overclocking the shaders didn't really do much of anything. They even used games that should be among the most shader-heavy, F.E.A.R. and Oblivion.

If this supposed G80 Ultra just has more bandwidth, I don't expect it to be more than 3 to 5% faster.

http://www.anandtech.com/video/showdoc.aspx?i=2931&p=1

There's the article.
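
To illustrate why that kind of test points at the core clock: if you model frame time as whichever clock domain's work takes longest, raising a clock that isn't the bottleneck barely moves the total. A quick toy sketch in Python (the per-domain "work" numbers are made up purely for illustration, not measured from G80):

# Toy bottleneck model: frame time is set by the slowest clock domain.
# The work figures below are hypothetical, chosen only to illustrate the idea.
def frame_time(core_mhz, shader_mhz, mem_mhz):
    core_work = 1000.0    # work tied to the core clock (ROPs, TMUs, setup)
    shader_work = 400.0   # work tied to the shader clock
    mem_work = 600.0      # work tied to the memory clock
    return max(core_work / core_mhz,
               shader_work / shader_mhz,
               mem_work / mem_mhz)

base = frame_time(575, 1350, 900)        # stock 8800 GTX-ish clocks
core_oc = frame_time(632, 1350, 900)     # +10% core
shader_oc = frame_time(575, 1485, 900)   # +10% shader
print(base / core_oc, base / shader_oc)  # ~1.10x speed-up vs ~1.00x

In that made-up case, the core overclock delivers nearly its full 10%, while the shader overclock delivers essentially nothing, which is the pattern the article reported.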
 
Well, considering the hardware those games were targeting, you would expect that sort of behavior. It's definitely a good sign that F.E.A.R. isn't maxing out G80's shader core.
 
I doubt the ROPs are a significant limiting factor in today's hardware.

Well, the core clock controls the texture rate, the ROPs, and the thread dispatcher. F.E.A.R. certainly isn't texture limited, and if it were the thread dispatcher, that would be a pretty severe problem.
 
I doubt the ROPs are a significant limiting factor in today's hardware.
It might actually be for F.E.A.R., although I would rather classify it as stencil fillrate. One of the only things in which the G80's power didn't grow by ridiculous proportions is stencil, as far as I can see - so it makes sense for it to become a partial bottleneck there.

In some games and for specific parts of the scene, triangle setup (or attribute fetch?) might sometimes be a bottleneck. It's not strictly unimaginable to think the TMUs might be a bottleneck if an entire area needed 16x AF, although it's probably one of the least frequent ones in practice, I'd assume.

In the end, the G80 looks fairly balanced for today's games line-up, since different bottlenecks show up in different titles. There's probably some more perf/mm2 to be achieved by tweaking some ratios, but for a first iteration, it's quite good imo. It'll be interesting to see how G84/G86 and future G8x/G9x turn out from that point of view... Arguably, G80 might be significantly too ALU-light for next-generation games, but we'll see.
 
It might actually be for F.E.A.R., although I would rather classify it as stencil fillrate. One of the only things in which the G80's power didn't grow by ridiculous proportions is stencil, as far as I can see - so it makes sense for it to become a partial bottleneck there.
I thought the stencil fillrate was dramatically increased along with the z-test fillrate.
 
And exactly how do you reconcile that hypothesis with the fact that the GTS pulls far ahead of the X1950XT in many cases? The balance of shader power to fillrate is about equal when comparing the GTX to the GTS, with a slightly higher emphasis on shading on the GTX.
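
For what it's worth, a quick back-of-the-envelope with the stock specs (assuming 128 SPs at 1350 MHz and 24 ROPs at 575 MHz for the GTX, versus 96 SPs at 1200 MHz and 20 ROPs at 500 MHz for the GTS): shader throughput scales by 128×1350 / (96×1200) = 1.5x, while pixel fillrate scales by 24×575 / (20×500) ≈ 1.38x. So the GTX does tilt slightly further toward shading, as said above.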
 
LOL, what situations? Like real games? :LOL:

You did say G80 is "entirely fillrate limited". Did you mean "entirely fillrate limited unless it's not fillrate limited"?
 
I thought the stencil fillrate was dramatically increased along with the z-test fillrate.
I'd have to recheck, but iirc, it was only scaled by the number of ROPs. There is no such thing as "free 4x MSAA" for stencil, unless I'm mistaken. So sure, it was increased by a substantial amount, but less so than the z-only fillrate.

NVIDIA is probably thinking that next-gen engines won't make as heavy use of stencil as Doom 3 did, and the ROP design is likely something they want to keep for a while. That's my guess, at least. Also, you could argue one of the big reasons to increase the number of depth tests (CSAA) doesn't apply to stencil.

EDIT: And no offense, but your single-line replies with massive overconfidence in things you may not fully master can be slightly tiresome, icecold1983, heh...
 
Seriously, icecold1983, back up your statements with evidence. The one link you've posted so far doesn't even come close to indicating the conclusions you are spouting.
 
I'd have to recheck, but iirc, it was only scaled by the number of ROPs. There is no such thing as "free 4x MSAA" for stencil, unless I'm mistaken. So sure, it was increased by a substantial amount, but less so than the z-only fillrate.
Well, according to the B3D GPU charts, it is:
http://www.beyond3d.com/misc/chipcomp/?view=boarddetails&id=325

...though according to the Beyond3D GPU review, you're right:
http://www.beyond3d.com/reviews/nvidia/g80-arch/index.php?p=10

Edit: Though it's still a whopping 96 stencil tests per clock, so the G80 is no slouch in this area, either.
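
For a rough sense of scale (assuming the 8800 GTX's 575 MHz core clock), 96 stencil tests per clock works out to about 96 × 575M ≈ 55 Gtests/s. If I remember the G71 figures right (32 z/stencil ops per clock at 650 MHz on the 7900 GTX, so roughly 21 Gtests/s), that's still around a 2.6x jump, just not the kind of leap G80 made in most other areas.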
 