So... what will G80/R600 be like?

DegustatoR said:
Wasn't Kirk talking about now? As if the unified approach isn't optimal for now? When did he say this, anyway? A year ago?

I think it's possible that he meant the DX9 SM3 generation.

This doesn't pass the giggle test for me, though. Context is all, and that context would be "duh!". Everybody and his brother knows you are talking about Vista and DX10 in that conversation.

I'd find intentional misdirection an easier-to-swallow answer than "oh, you didn't mean SM3?".
 
geo said:
This doesn't pass the giggle test for me, though. Context is all, and that context would be "duh!". Everybody and his brother knows you are talking about Vista and DX10 in that conversation.

I'd find intentional misdirection an easier-to-swallow answer than "oh, you didn't mean SM3?".

Well, the question bit-tech asked before the Kirk quote I posted was, "So what about the future?"

I guess you could question how far into the future he took that to mean, but I would think he meant beyond G7x.
 
You could argue that Kirk's statement was more about RSX vs. Xenos than upcoming PC products, though... (not that I have any idea of what they are going to end up producing)
 
psurge said:
You could argue that Kirk's statement was more about RSX vs. Xenos than upcoming PC products, though... (not that I have any idea of what they are going to end up producing)
I agree with you on that 100%
 
Dave Baumann said:
I think this is the one pertinent to future architectures.

I find it hard to believe that Kirk is unaware that texture units may be decoupled from pixel shading ALUs, and that while a pixel ALU may be doing vertex work, its texture units won't necessarily be "idle".

I could believe that specialized geometry shading units would still be more efficient than reusing pixel shader ALUs due to memory access pattern differences.
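
Something like this toy model (entirely made up, not any real chip's arrangement) is how I picture the decoupling: the texture unit drains its own queue of outstanding pixel fetches even while the shared ALU happens to be scheduled onto vertex work.

Code:
# Toy model of decoupled texture units (invented for illustration,
# not any real chip): the texture unit services its own queue of
# pending pixel fetches independently of what the ALU is running.
from collections import deque

alu_work = deque(["vertex"] * 3 + ["pixel"] * 3)            # unified ALU queue
tex_work = deque(["fetch0", "fetch1", "fetch2", "fetch3"])  # pending texture ops

for cycle in range(6):
    alu = alu_work.popleft() if alu_work else "idle"
    tex = tex_work.popleft() if tex_work else "idle"  # independent of the ALU
    print(f"cycle {cycle}: ALU={alu:<6} TEX={tex}")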
 
obobski said:
Ever wonder what nVidia did/does with all the 3dfx tech they now own?

Threw it away, since it's 5-year-old tech now? Everything from the 3dfx IP is pretty outdated nowadays.
 
_xxx_ said:
Threw it away, since it's 5-year-old tech now? Everything from the 3dfx IP is pretty outdated nowadays.

I think the real question is: did they ever really use it (okay, they did; at least IIRC it was mentioned somewhere that they used a video engine they got in the 3dfx asset buyout), or did they ever really start any research for the future based on 3dfx tech?
 
Dave Baumann said:
Well, I can't ever recall NV's PR making any comment about it. The commentary and reaction have stemmed from Kirk, and he's not divorced from engineering.
No, he isn't, but his public comments are basically NV PR-approved. Or, in better words, have to be.

As for the original topic, these two products will be what Vista wants. Know that (with finality) and we have the answer :). I will start another thread regarding an interesting related side topic, that of unified shaders (I have a lot to say about this!).
 
Kaotik said:
I think the real question is: did they ever really use it (okay, they did; at least IIRC it was mentioned somewhere that they used a video engine they got in the 3dfx asset buyout), or did they ever really start any research for the future based on 3dfx tech?
Who cares? Compare then (in 3dfx's time) and now; that tech is basically outdated (by virtue of other technologies beyond those in 3D).

And you're not talking about Gigapixel, are you? Because that isn't "3dfx tech", strictly speaking.
 
Reverend said:
I will start another thread regarding an interesting related side topic, that of unified shaders (I have a lot to say about this!).
I hope this will include the differentiation between a unified shader API (software) and a unified shader architecture (hardware). I have been itching to have this clarified for me.
 
Nvidia has to shrink the die. ATI already took the hit, and it was nearly a total disaster with the R520. This can't be an easy thing, so I'm guessing Nvidia will/could have similar problems.
 
According to the public PS3 roadmap, NVIDIA has at this time already produced a high-end GPU using a 90 nm process (at a fab they had never used before).
 
DemoCoder said:
I don't believe in a late-stage alteration of G80 to make it unified.

What makes you think that G80 is the same G80 that was there a few months ago? NV has switched names before; that means nothing IMHO.

Just saying, what we will eventually know as G80 may as well have been, dunno, G87.5 until yesterday.
 
Dave Baumann said:
I think this is the one pertinent to future architectures.
The key thing, I think, is that he "bottles up" a shader engine (pixel or vertex) with the features associated with either, e.g. rasterisation and interpolation.

If you separate these kinds of components and keep the shader engine "pure" as just an "ALU engine", then his argument falls on its face.

Well, that's how it seems to me. The block diagram for Xenos seems to back this up.

[Image: Xenos block diagram]

The PowerVR SGX also makes a mockery of his stance.
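
To make the "pure ALU engine" point concrete, here's a rough structural sketch (all the names are invented, nothing to do with any actual design): rasterisation and interpolation live in their own fixed-function blocks, so the shader engine is just a program runner that doesn't care whether it's fed vertices or fragments.

Code:
# Rough sketch, all names invented: fixed-function blocks surround a
# "pure" ALU array that runs vertex or pixel programs alike.

def rasterise(triangle):
    # fixed-function: triangle -> fragment positions (stub)
    return [(x, y) for x, y in triangle]

def interpolate(fragment):
    # fixed-function: per-fragment attribute setup (stub)
    return {"pos": fragment}

def alu_array(program, inputs):
    # the "pure" shader engine: it only runs programs, so it doesn't
    # care whether the inputs are vertices or fragments
    return [program(i) for i in inputs]

vertices  = [(0, 0), (1, 0), (0, 1)]
positions = alu_array(lambda v: v, vertices)        # vertex work
frags     = [interpolate(f) for f in rasterise(positions)]
colours   = alu_array(lambda f: (1, 0, 0), frags)   # pixel work on the same ALUs
print(colours)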

Jawed
 
DemoCoder said:
I find it hard to believe that Kirk is unaware that texture units may be decoupled from pixel shading ALUs, and that while a pixel ALU may be doing vertex work, its texture units won't necessarily be "idle".

I could believe that specialized geometry shading units would still be more efficient than reusing pixel shader ALUs due to memory access pattern differences.
I think NVidia's backwardness on this whole topic stems from a lack of foresight on scheduling.

The Xenos scheduler is not a trivial bit of gear. But with it, the whole USA falls into place. Without it, it just looks like a minefield.

The argument over whether a GS should be a dedicated piece of hardware or can re-use the shader engine (e.g. in collaboration with a primitive assembler) is certainly ripe for discussion...
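
To wave hands at what the scheduler actually buys you, here's a crude sketch of the arbitration idea (the policy below is invented for illustration; the real thing tracks far more state than this):

Code:
# Crude sketch of unified-shader arbitration (policy invented):
# feed the shared ALUs from whichever queue has ready threads,
# prioritising vertices so the rasteriser upstream never starves.
from collections import deque

vertex_q = deque(f"v{i}" for i in range(2))
pixel_q  = deque(f"p{i}" for i in range(6))

while vertex_q or pixel_q:
    if vertex_q:                 # keep the front end fed first
        print("ALUs run", vertex_q.popleft())
    else:                        # spare capacity goes to pixels
        print("ALUs run", pixel_q.popleft())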

Jawed
 
Jawed said:
I think NVidia's backwardness on this whole topic stems from a lack of foresight on scheduling.

The Xenos scheduler is not a trivial bit of gear. But with it, the whole USA falls into place. Without it, it just looks like a minefield.

The argument over whether a GS should be a dedicated piece of hardware or can re-use the shader engine (e.g. in collaboration with a primitive assembler) is certainly ripe for discussion...

Jawed
Oops... that's a nice short acronym :cool: ... USA... Unified Shader Architecture, not United States of America. Now I know ATi thinks big enough :eek:.
Sorry for the OT.
 
Jawed said:
I think NVidia's backwardness on this whole topic stems from a lack of foresight on scheduling.

I find that very hard to believe. The "backwardness on the whole topic," that is. A unified shader architecture is quite logical. It is not so much a performance enhancer as a way of keeping scaling feasible. You can bet your last quid that scheduling would be the hot topic. I think scheduling is the hot topic even without a USA.

Scheduling on graphics hardware is an obvious place for improvement as programmability grows. However, it's not like these boys are sailing into uncharted territory. They have decades of research in the CPU field to draw from. I find it very unlikely that they will come up with something startlingly new.
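
For instance, fine-grained multithreading to hide fetch latency is textbook CPU material; a toy version (all the numbers are invented):

Code:
# Toy fine-grained multithreading, a decades-old CPU idea: when a
# thread stalls on a long-latency fetch, switch to another ready
# thread rather than idling the pipe. All numbers are invented.
threads = {"A": 3, "B": 3}   # remaining instructions per thread
stalled = {}                 # thread -> cycles until its fetch returns

for cycle in range(8):
    stalled = {t: c - 1 for t, c in stalled.items() if c > 1}
    ready = [t for t in threads if threads[t] > 0 and t not in stalled]
    if not ready:
        print(f"cycle {cycle}: idle")
        continue
    t = ready[0]
    threads[t] -= 1          # issue one instruction from a ready thread
    stalled[t] = 2           # pretend it then waits two cycles on a fetch
    print(f"cycle {cycle}: issue from thread {t}")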

Perhaps your love for Xenos/C1 has blinded you?
 
Don't worry, there's no need for a new smiley; we already knew you're in love ;)
 