NVIDIA: Beyond G80...

But I agree that the "1 year from now" timetable is a little too tight for a G90, even with Jen-Hsun's comments about a load of codenames to come.

A random thought, echoing a refrain oft repeated about certain other products: just because the G80 was a year late, what makes you think the G90 will be similarly late? :devilish:

I don't think the situation is quite the same, but it might not be quite as long a wait. The G71 was *supposed* to be the G80, after all...

I'd also agree that the MUL->MAD enhancement has precedent, but the MUL appears to be hiding in the interpolator, so I'm not sure that makes sense. Also, raising issue width risks decreasing maximum utilization, so it's a tradeoff between spending the transistors there and on another cluster.
 
Might be useful:
Demand for Nvidia G80 chips weak as market expects G84 and G86 lineup
[...], the sources pointed out that Nvidia's forthcoming G84 and G86 GPUs for the entry-level and mid-range segments are likely to play a key role in the GPU market, with the two new GPUs expected to debut in the first quarter of 2007.

Although details of the G84 and G86 are still not available, Nvidia has completed the roadmap for the two chips, with local graphics card makers likely to receive product samples in January or February of next year, the sources stated.
 
I'm not sure why they're associating weak demand for the G80 with the upcoming entry-level and mid-range variants; those target completely different markets. Weak demand is more likely down to the lack of compelling titles and the impending R600 launch. Many, many people are also quite happy with last-generation performance and IQ in current games.
 
http://www.techreport.com/sympoll/polllist.php?dispid=120

Also, talk of the G90 and so on seems to have missed one factor: if G80 turns out to be a dud in comparison with R600, what kind of turn-around is NVidia going to have to do?

It's looking pretty certain, now, that R600 will have ~50% more GFLOPs (345 versus >500). Am I meant to believe that GFLOPs will not be important in the D3D10 generation?

Jawed
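For reference, a quick back-of-the-envelope sketch (Python) of where the 345 figure comes from, using the public 8800 GTX numbers (128 scalar SPs at a 1.35 GHz shader clock, one MAD per clock, plus the co-issued MUL that so far isn't showing up in shader tests). The ">500" R600 number is just the rumour being discussed, not a confirmed spec:

Code:
sps          = 128        # stream processors on the 8800 GTX
shader_clock = 1.35e9     # shader clock in Hz
mad_flops    = 2          # one multiply-add = 2 flops
mul_flops    = 1          # the co-issued MUL that currently seems to be "missing"

mad_only = sps * shader_clock * mad_flops / 1e9                # ~345.6 GFLOPs
with_mul = sps * shader_clock * (mad_flops + mul_flops) / 1e9  # ~518.4 GFLOPs

print(f"G80, MAD only:  {mad_only:.1f} GFLOPs")
print(f"G80, MAD + MUL: {with_mul:.1f} GFLOPs")
print(f"Rumoured R600:  >500 GFLOPs (~{500 / mad_only - 1:.0%}+ above the MAD-only figure)")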
 
It's looking pretty certain, now, that R600 will have ~50% more GFLOPs (345 versus >500). Am I meant to believe that GFLOPs will not be important in the D3D10 generation?

Jawed

Where does that come from? I haven't seen any references to that.

Also, if it does turn out to have 50% more FLOPs, NV can always fight back by pricing their parts lower. That should be easy by then, with G80 a few months into production.
 
Since when is Nvidia content just to have the biggest bang for the buck? I'm sure they want to have the biggest bang, period.

Jawed is probably discounting the MUL until it's found :)
 
It's looking pretty certain, now, that R600 will have ~50% more GFLOPs (345 versus >500). Am I meant to believe that GFLOPs will not be important in the D3D10 generation?

There's a limit to how much NV can accomplish. Assuming that G80 doesn't have huge redundancy built in, it looks like a cluster costs roughly 80M transistors. If you wanted your 80nm part to come in at roughly the same die size as the 90nm part, you might be able to add two clusters: a 25% increase...

If you went for the MUL->MAD upgrade, you'd have to somehow feed an extra operand (maybe more, considering its MIA status) to the units. That bandwidth is going to cost, and it has to be carried across all 8 clusters. Probably a lot less than 160M transistors, though, and it might boost your numbers by a similar amount.
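Rough numbers for the two options above. The 80M per-cluster cost is this thread's own estimate (the 681M total for G80 is the published transistor count), and the MUL->MAD case simply assumes the second unit becomes a fully usable co-issued MAD:

Code:
g80_transistors = 681e6   # published G80 transistor count
cluster_cost    = 80e6    # per-cluster estimate from this thread
clusters        = 8

# Option 1: 90nm -> 80nm optical shrink at roughly constant die size.
area_scale     = (80 / 90) ** 2                    # ~0.79x area per transistor
budget_80nm    = g80_transistors / area_scale      # ~860M in the same area
headroom       = budget_80nm - g80_transistors     # ~180M
extra_clusters = int(headroom // cluster_cost)     # ~2
print(f"Shrink headroom: ~{headroom / 1e6:.0f}M transistors -> "
      f"{extra_clusters} extra clusters (+{extra_clusters / clusters:.0%})")

# Option 2: promote the co-issued MUL to a full MAD (purely hypothetical).
sps, clk  = 128, 1.35e9
peak_now  = sps * clk * 3 / 1e9   # MAD + MUL ~= 518 GFLOPs
peak_then = sps * clk * 4 / 1e9   # MAD + MAD ~= 691 GFLOPs
print(f"MUL->MAD: {peak_now:.0f} -> {peak_then:.0f} GFLOPs (+{peak_then / peak_now - 1:.0%})")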

A ROP upgrade, plus a clock-speed boost and GDDR4, plus ALU tweaking??

One other question -- given how well the 8800s overclock, why the ban? Are they trying to leave room for a small incremental upgrade?
 
Heh, I bet it'll be extremely amusing to see how wrong we all are once the actual roadmaps and/or products are out there. Or maybe not, and we'll be mostly right, but I seriously doubt that, TBH. As for overclocking, you'd indeed expect them to want to be able to release a small incremental bump with the 80nm shrink, and they wouldn't want to repeat the G71 overclocking fiasco.

What's funny is that if NVIDIA decides to be very aggressive, they could manage what I described as G95 performance not in H2 2008, but rather in H1 2007! SLI is an interesting technology to have at your disposal indeed, and G80 scaling seems to be excellent, so we'll see what happens! :)


Uttar
 
How would I know? :p Better/faster support for 8x/16x MSAA maybe? Faster INT8 blending rates?


Uttar
 
Heh, I bet it'll be extremely amusing to see how wrong we all are once the actual roadmaps and/or products are out there. Or maybe not, and we'll be mostly right, but I seriously doubt that, TBH. As for overclocking, you'd indeed expect them to want to be able to release a small incremental bump with the 80nm shrink, and they wouldn't want to repeat the G71 overclocking fiasco.

What's funny is that if NVIDIA decides to be very aggressive, they could manage what I described as G95 performance not in H2 2008, but rather in H1 2007! SLI is an interesting technology to have at your disposal indeed, and G80 scaling seems to be excellent, so we'll see what happens! :)


Uttar

I'd dare to speculate that G8x as a base architecture (irrespective of future codenames) will stick around for quite a few years to come. The rather dramatic changes in G8x compared to its predecessors don't sound to me like a short-term plan at all.

I haven't a single clue whether Microsoft and the IHVs are already negotiating "DX-Uber-Next" (or "D3D11", whatever). It sounds a bit too early to me right now, but when it starts it will be interesting to see what advancements are proposed and how it shapes up after countless drafts and IHVs screaming in between about hardware budgets. That future "inflection point" might be the place to look for major architectural overhauls, or maybe not. Anything in between sounds more like efficiency tweaks within the same base architecture.
 
How would I know? :p Better/faster support for 8x/16x MSAA maybe? Faster INT8 blending rates?


Uttar

Why increase the multisampling sample densities rather than expand further on coverage mask sampling? I had a probably dumb idea a while ago while posting at 3DCenter: IF it were possible to find a way to apply transparency multisampling (or call it alpha-to-coverage, whatever) to all N coverage mask samples (irrespective of whether only, say, N/4 z/stencil samples are stored), it would be more than just interesting.

Transparency supersampling or adaptive AA might give excellent results quality-wise, but if your scene is overloaded with alpha-test textures, the gain over full-scene SSAA isn't all that big after all. The unfortunate thing about coverage sampling is that you only get as many transparency samples as there are Z/stencil samples actually stored.

Uhmmm kindly ignore the above if it's a stupid idea ;)
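To give a sense of why the number of stored samples is the limiting factor here, a very rough per-pixel storage comparison (Python). Byte counts assume RGBA8 colour and D24S8 depth/stencil; the per-coverage-sample bookkeeping cost is a guess, since the exact layout isn't public:

Code:
BYTES_COLOR   = 4    # RGBA8 colour sample
BYTES_Z       = 4    # D24S8 depth/stencil sample
COVERAGE_BITS = 4    # assumption: a few bits of bookkeeping per extra coverage sample

def per_pixel_bytes(stored, coverage):
    # Full colour + Z/stencil only for the stored samples; the extra coverage
    # samples carry just a small amount of bookkeeping information.
    extra = max(coverage - stored, 0)
    return stored * (BYTES_COLOR + BYTES_Z) + extra * COVERAGE_BITS / 8

modes = {
    "4x MSAA":             (4, 4),    # (stored samples, coverage samples)
    "16x CSAA (4 stored)": (4, 16),
    "16x 'true' MSAA":     (16, 16),
}
for name, (stored, coverage) in modes.items():
    print(f"{name:20s} {per_pixel_bytes(stored, coverage):5.1f} bytes/pixel, "
          f"{stored} samples usable for transparency AA")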
 