The G92 Architecture Rumours & Speculation Thread

[attached image]


I know CJ described it before, but I'll be damned if that isn't the most ghetto thing I've ever seen in GPU technology... even more so than the dreaded dongle, and that was ghetto.

FrankenSLI! :LOL:

But really, who's going to see it after the few seconds of rolling your eyes on the initial install?

Though to tell the truth, I'd be a little worried about the life span of the one stretched the furthest there. It doesn't look too comfortable.
 
Because it doesn't need to be on Monday ;)

Remember that nV has pretty much always released a new high end about twice as fast as the previous gen? That's exactly what you'll see soon, and I'm talking a single-chip solution. Mark my words :)

Hmm, and here I don't remember a single time when Nvidia's new high end doubled the performance of the previous high end until the 5900 -> 6800 transition. Up till then, you'd be lucky to see a 25-30% increase in performance compared to the previous high end.

And since then 6xxx -> 7xxx -> 8xxx hasn't exactly been a doubling of performance each time. Although in certain benchmarks with certain settings you could get a doubling or close to it.

I'm expecting good things from them certainly, but I'll be extremely surprised if they double the performance of the 8800 GTX without going to a GX2 style card and then only in certain cherry picked SLI friendly benchmarks.

Regards,
SB
 
Trilinear rate bench wanted!


What do you expect? 8TA/16TF per Cluster? :LOL:

2900guy said:
that sounds like something negative....
No, having more texture addressers is better.

I'm expecting good things from them certainly, but I'll be extremely surprised if they double the performance of the 8800 GTX without going to a GX2 style card and then only in certain cherry picked SLI friendly benchmarks.
128 SPs @ 2.4 GHz with better use of the MUL for general shading could give this factor of 2 over an 8800 GTX, I would think.
The memory clock is also supposed to rise a lot: 1.4 GHz+.
This could end in nice performance through the synergy effects I've also observed on GPUs.
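
A quick back-of-envelope on that factor-2 claim (a sketch with my own assumptions: 3 flops per SP per clock, i.e. the MADD plus a fully usable co-issued MUL, and the 8800 GTX's known 1.35 GHz shader clock):

```python
# Rough peak shader throughput comparison. The 2.4 GHz figure is the
# rumour discussed above; 3 flops/SP/clock assumes the MADD (2 flops)
# plus the co-issued MUL (1 flop) are all usable for general shading.

def shader_gflops(sps, shader_clock_ghz, flops_per_clock=3):
    return sps * shader_clock_ghz * flops_per_clock

gtx = shader_gflops(128, 1.35)  # 8800 GTX: 128 SPs @ 1.35 GHz -> ~518 GFLOPS
new = shader_gflops(128, 2.4)   # rumoured part: 128 SPs @ 2.4 GHz -> ~922 GFLOPS

print(f"ratio: {new / gtx:.2f}x")  # ~1.78x from clock alone
```

Clock alone only gets you ~1.78x; the rest of a true factor of 2 would have to come from that better MUL utilisation, since G80 rarely gets its paper MUL in practice.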
 
How else would you explain the 24+ GTexels/s MT fillrate measured above?
That's interesting, certainly, though I have to wonder why. It must be cheap to implement - from the benches so far there doesn't really seem to be a benefit (not counting the fill rate test itself, of course). Compared to the competition, even only 28 of the old units would look like plenty (R600, and presumably RV670, can actually process 32 texture addresses per clock, and at a higher clock than G80/G92, but can only filter 16). 56 just looks ludicrous - and obviously, like the 8600 cards, it can't reach its theoretical throughput in the fill rate test (which would have been 33.6 GTexels/s).
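
The arithmetic behind that, as a quick Python sketch (assuming the 8800 GT's 600 MHz core clock and the 56 TAs under discussion; 24 GTexels/s is the measured figure quoted above):

```python
# Theoretical vs. measured multi-texturing rate for the 56-TA case.

core_clock_mhz = 600       # 8800 GT core clock
texture_addressers = 56    # the TA count under discussion

peak = texture_addressers * core_clock_mhz / 1000   # GTexels/s
measured = 24.0                                     # the "24+" figure above

print(f"theoretical peak: {peak:.1f} GTexels/s")    # 33.6
print(f"efficiency: {measured / peak:.0%}")         # ~71% (for 24 flat)
```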

Good opportunity for brilinear optimizations - though I'm not sure it would really help that much.
 
I knew there was something up with the texel fill rate after seeing that Chinese review. I knew getting numbers like that with just 16 TMUs would be impossible. It also explains how the gap between the GTX and the GT increases as resolution goes up, as the GT is obviously pixel fill rate limited at those high resolutions. Now, do I understand correctly: you guys are saying it has 56 TMUs but can't reach that level of fill rate because it only filters 28?
 
I'd say about 75% efficiency with multi-texturing on G92... and even lower for the full 128 SP part (64 textures).
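
Putting rough numbers on that (a sketch; the 8 TAs per cluster and the 600 MHz clock for the full part are my assumptions, not confirmed specs):

```python
# Peak multi-texturing rate scaling with cluster count, and what ~75%
# efficiency would leave. 8 TAs per cluster is assumed here.

def peak_mt_rate(clusters, core_clock_mhz, tas_per_cluster=8):
    return clusters * tas_per_cluster * core_clock_mhz / 1000  # GTexels/s

for clusters in (7, 8):  # 56-TA part vs. full 128 SP / 64-TA part
    peak = peak_mt_rate(clusters, 600)
    print(f"{clusters} clusters: peak {peak:.1f} GTexels/s, "
          f"at 75% -> {0.75 * peak:.1f} GTexels/s")
```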
 
Shack: Do you have any insight to how well the upcoming range of cards will support Crysis, not just on the high end but lower down the ladder as well?

Cevat Yerli: Very, very well. Stay tuned for more on this. In mid November you will see the new NVidia cards. They are a blast for Crysis and really, really very good deals.
Link

Looks like Cevat ate too many gyros, or he means the new GTS.

R.I.P. IHV-independent game development.
 
Oh, I see. Now can you please explain to me why the G92 suffers from this relatively low efficiency?

Bandwidth limitation, or the shaders not being fast enough to feed these 56 texture addressers with coordinates.
But since nobody would play on this card without trilinear or 2x bilinear AF filtering, it's irrelevant, and the spare TAs could be used for some future stuff like vertex texture fetches.
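
For the bandwidth side of that argument, a deliberately pessimistic sketch (my assumptions: uncompressed 32-bit texels and zero texture-cache reuse; real workloads with DXT compression and caching need far less, which is why the measured rate only falls to ~75% rather than to the bandwidth ceiling):

```python
# Worst-case bandwidth needed to keep 56 TAs fed vs. what the card has.

peak_gtexels = 56 * 600 / 1000   # 33.6 GTexels/s theoretical
bytes_per_texel = 4              # assumed uncompressed RGBA8, no cache hits

needed = peak_gtexels * bytes_per_texel   # GB/s, worst case
have = 256 / 8 * 1.8                      # 8800 GT: 256-bit bus, 1.8 GT/s GDDR3

print(f"worst-case need: {needed:.1f} GB/s vs available: {have:.1f} GB/s")
# -> 134.4 vs 57.6: without cache hits/compression the TAs would starve.
```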
 
Actually it fares quite well compared to G84, as the latter reaches just 61% of its peak multi-texturing rate. Considering G84's rather low shader/base clock ratio, there's no question why G92 - using the G84 TCP tech - reaches higher relative texturing rates.
 
OK, so let me see if I get this: the G92 has 56 TMUs, but due to something I don't understand exactly, it's only performing at around 75% of its theoretical fill rate, whereas the G80 is about 95% efficient. So something is preventing the G92 from reaching the same efficiency as a G80. Can you explain what this "something" is? Also, did Nvidia do this on purpose because they didn't want the 8800 GT to get any closer to the GTX than it already is, or is it because of manufacturing problems?
 