NVIDIA's Goldman Sachs Webcast, Summary/Transcript (info G71, G80, RSX, etc.)

_xxx_ said:
This tells you anything?

If (and I mean IF) this should be the case, then this MUST be a unified part.

Well, it must mean something anyway. But, yeah, two years (nearly three from today) does seem an awful long way to go without unifying.
 
_xxx_ said:
....This tells you anything? If (and I mean IF) this should be the case, then this MUST be a unified part.

Well, that doesn't tell me anything at all. nVidia will certainly tell you that NV3x was a "significant new architecture" when it launched.

I don't think Jawed is necessarily implying that G80 will be a complete and utter failure à la NV30. It's just that nVidia tends to seemingly "bolt on" new technology to their old one (NV3x vs. NV2x), and then gradually shift with new generations....whereas ATI tends to make larger wholesale jumps in architecture at these inflection points (R300 vs. R200), and then relatively minor tweaks over time until the next inflection.

nVidia's approach didn't work out well last time...but that doesn't mean the opposite can't happen this time.
 
I presume he's pointing at the idea that since we think they *are* on the road to hardware unification, they wouldn't wait three years (from roughly today) to get there, two years after Vista's release.

Maybe the bigger question tho is definition of terms re "hardware unification" (let's sidestep software methods for making it transparent to the API whether the hardware is unified or not, please, as it just will confuse the issue under discussion here) and if everyone really means the same thing when they point at it. I'm not convinced on that yet. Tho I do think there has been *some* shift in what NV means when they say it, I don't know that they mean what ATI means when ATI says it.
 
Dave Baumann said:

As I got it, that will be a requirement in the near future. And doing it just at the driver level while still keeping the HW non-unified will not get them through 2-3 years; maybe the first wave would do it, but they'd soon be losing on all fronts.

EDIT:
geo got it with his first sentence above :)
 
In this interview in July 2005 Kirk seemed to be talking down a Unified architecture. Whether that's because the interview was to promote RSX and his comments were to play down Xenos, I've no idea. Here's what he said last July:

David Kirk said:
"Well, let's get something straight. Microsoft makes APIs (Application Programming Interfaces- Ed) not hardware. WGF is a specification for an API specification - it's software, not hardware."

"For them, implementing Unified Shaders means a unified programming model. Since they don't build hardware, they're not saying anything about hardware.

"Debating unified against separate shader architecture is not really the important question. The strategy is simply to make the vertex and pixel pipelines go fast. The tactic is how you build an architecture to execute that strategy. We're just trying to work out what is the most efficient way.

"We will do a unified architecture in hardware when it makes sense. When it's possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in this architecture."
 
And earlier than that, Dec 2004, http://www.extremetech.com/article2/0,1697,1745060,00.asp

Question: Are GPU architectures and Direct3D evolving toward a design where the distinction between vertex and pixel shaders essentially goes away?—davesalvator

David Kirk: For hardware architecture, I think that's an implementation detail, not a feature.

For sure, the distinction between the programming models and instruction sets of vertex shaders and pixel shaders should go away. It would be soooo nice for developers to be able to program to a single instruction set for both. As to whether the architectures for vertex and pixel processors should be the same, it's a good question, and time will tell the answer. It's not clear to me that an architecture for a good, efficient, and fast vertex shader is the same as the architecture for a good and fast pixel shader. A pixel shader would need far, far more texture math performance and read bandwidth than an optimized vertex shader. So, if you used that pixel shader to do vertex shading, most of the hardware would be idle, most of the time. Which is better—a lean and mean optimized vertex shader and a lean and mean optimized pixel shader or two less-efficient hybrid shaders? There is an old saying: "Jack of all trades, master of none."
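
To put Kirk's trade-off in rough numbers, here's a toy model I knocked up in Python (the unit counts and the "hybrid overhead" penalty are made-up illustrative figures, not anything from either IHV): fixed vertex/pixel pools win when the workload happens to match their ratio, but leave silicon idle when it doesn't, which is the gap a unified pool closes.

# Toy model of the dedicated-vs-unified shader trade-off.
# All numbers are illustrative assumptions, not real GPU figures.

def dedicated_time(vertex_work, pixel_work, vs_units=6, ps_units=24):
    # Each pool can only chew through its own kind of work, so the frame
    # takes as long as the slower (more overloaded) pool.
    return max(vertex_work / vs_units, pixel_work / ps_units)

def unified_time(vertex_work, pixel_work, units=30, overhead=1.15):
    # One pool handles everything; `overhead` models Kirk's "jack of all
    # trades" point, i.e. a hybrid unit being somewhat less efficient.
    return overhead * (vertex_work + pixel_work) / units

for vtx, pix in [(10, 290), (60, 240), (150, 150)]:   # pixel-heavy ... vertex-heavy
    d, u = dedicated_time(vtx, pix), unified_time(vtx, pix)
    print(f"vertex={vtx:3d} pixel={pix:3d}  dedicated={d:6.2f}  unified={u:6.2f}")

When the mix sits at the pools' 6:24 ratio the dedicated design wins despite the overhead; skew the mix either way and the unified pool pulls ahead because nothing sits idle.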

But all this is the old stuff. I was under the impression we were coming to the conclusion they've since shifted? Tho, to be honest, I'm not recalling what that conclusion is based on -- anything in the public record, or sidebars only to this point?
 
geo said:
But all this is the old stuff. I was under the impression we were coming to the conclusion they've since shifted? Tho, to be honest, I'm not recalling what that conclusion is based on -- anything in the public record, or sidebars only to this point?

Yes, I think the impression is that they have shifted / softened on the hardware unification issue. Though it doesn't seem like the change of heart would have been in time for '06 release of parts.

The general consensus has been pointing to G80 not being unified (while still supporting D3D10 of course), but nVidia's next major architecture would be unified. That might be G100 though...
 
Joe DeFuria said:
Yes, I think the impression is that they have shifted / softened on the hardware unification issue. Though it doesn't seem like the change of heart would have been in time for '06 release of parts.

The general consensus has been pointing to G80 not being unified (while still supporting D3D10 of course), but nVidia's next major architecture would be unified. That might be G100 though...

Ah, okay. Well, I think that's the bone we're chewing on then... it sounds like from their own description that if they don't do hardware unification in G80 we could be looking at Spring 2009 before NV has a whole family of such parts (maybe Fall '08 for the top-end). I suspect some people think they can't mean that, so they must mean the other (that G80 *is* unified in hardware, whatever that means to NV).

Edit: And to attempt to complete my mentalist trick, I think what Jawed is saying is, "Oh yes, they do mean they won't have a family of hardware unified parts until early 2009. But around about early 2007 they are going to go 'oh f**k!' and change their minds."
 
Sunrise said:
Try to read between the lines:

" => So far, 90nm progress is very good, at both UMC and TSMC. Yields are good. Good job done by the fabs.
=> Have to work hard to transition. Yields are good, Costs are good, so nothing in that that could hurt our outlook."
NVidia has a startling history of gilding the lily when it comes to these "communications".

I seriously think NVidia's 90nm GPUs were supposed to arrive in desktop form last summer - slightly ahead of the time that the laptop (slower, perhaps fewer quads) parts were supposed to turn up.

So everything else NVidia says with respect to 90nm, being so late, is to make it seem like everything is right on track.

Jawed
 
trinibwoy said:
With absolutely no information whatsoever on which to base such an opinion, I'm wondering what is guiding you toward this doom-and-gloom outlook you've had re Nvidia as of late.
There's a consensus that NVidia's not going Unified with G80.

The more you look at what's planned for D3D10, the more of a mistake non-unified looks, both because of the intensive non-pixel-shader coding styles it allows and because of the significant die area savings that unification brings.

Xenos means that developers will have been working with a unified architecture for a year+ by the time Vista arrives. That work will inform the devs' development attitudes towards Vista as well as ATI's efforts in promoting all that SM4 goodness.

(Presuming that it's actually called SM4 - have to admit I've not heard a peep of it so far.)

Jawed
 
Jawed, what do you make of this thread and the NV patent in it, and Uttar's theories?

http://www.beyond3d.com/forum/showthread.php?t=25836

I saw you on the thread, but mostly addressing the ATI side of the ball. Does the NV patent from July '05 do much for you re hardware unification? Or are you assuming it is already in G7x?

Edit: Scratch Uttar's theories! He was channelling some other IHV entirely! :)
 
_xxx_ said:
This tells you anything?

If (and I mean IF) this should be the case, then this MUST be a unified part.
"New" simply means there'll be things like:
  • geometry shader (distinct programmable unit as per at least one patent)
  • streamout (only required from the vertex shader unit)
  • decoupled texturing unit (prolly shared by GS, VS and PS)
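
For context, a very rough sketch of how those pieces chain together in the D3D10 logical pipeline - plain illustrative Python of my own, not any real API; the point is just that streamout writes post-VS/GS geometry back to memory before (or instead of) rasterisation:

def vertex_shader(v):
    # Placeholder transform - stands in for the programmable VS.
    return v * 2

def geometry_shader(prim):
    # New D3D10 stage: a program that can emit zero or more primitives
    # per input primitive (here it simply duplicates each one).
    return [prim, prim]

def draw(vertices, stream_out=None):
    shaded = [vertex_shader(v) for v in vertices]
    prims = [tuple(shaded[i:i + 3]) for i in range(0, len(shaded), 3)]
    amplified = [p for prim in prims for p in geometry_shader(prim)]
    if stream_out is not None:
        # Streamout: post-VS/GS geometry is written back to a buffer here,
        # before (or instead of) rasterisation and pixel shading.
        stream_out.extend(amplified)
    return amplified

buf = []
draw(list(range(6)), stream_out=buf)
print(len(buf), "primitives streamed out")   # 2 input triangles -> 4 out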
Jawed
 
Jawed said:
NVidia has a startling history of gilding the lily when it comes to these "communications".

I seriously think NVidia's 90nm GPUs were supposed to arrive in desktop form last summer - slightly ahead of the time that the laptop (slower, perhaps fewer quads) parts were supposed to turn up.

So everything else NVidia says with respect to 90nm, being so late, is to make it seem like everything is right on track.
Every IHV has a "past history" of bending the truth somewhat, but that's just the way these webcasts are done. That's perfectly normal in respect to what they want to tell their investors. Actually, the investors don't want to know the truth, they just want to hear "good things", so it's quite logical that "the truth" is not what they get.

What I don't understand is how you've come to the above conclusion, since "Have to work hard to transition" should tell you the exact opposite of what you just said. They're certainly not right on track and that's also what I got from that particular interview. Actually, you don't need to read that much into it: by saying that, they kind of admit they are late. But that's just my way of reading those statements.

However, just because they are late, doesn't mean that G71 could've been introduced last summer. Actually I find that a little far-fetched, when taking into account what NV needed to do with their underlying G70 architecture. That's like saying RSX could've been completed at the same time, with yields that had to be mass-production ready. Process maturity, yields, work involved in porting a fully non-low-k developed ASIC to low-k, shrinking it, re-tooling, library adjustments, potentially several respins to improve yields/margins, to name a few. These "facts" actually tell me the exact opposite.

So, summer? I wouldn't be so sure about that.
 
geo said:
I presume he's pointing at the idea that since we think they *are* on the road to hardware unification, they wouldn't wait three years (from roughly today) to get there, two years after Vista's release.
A unified architecture is stunningly different from the prior generations. It could easily take three years from start to product.

ATI has been working on hardware unification since 2002 (maybe earlier) and prolly promoting the idea of software unification (to M$ as part of DX) since before that.

Maybe the bigger question tho is definition of terms re "hardware unification" (let's sidestep software methods for making it transparent to the API whether the hardware is unified or not, please, as it just will confuse the issue under discussion here) and if everyone really means the same thing when they point at it. I'm not convinced on that yet. Tho I do think there has been *some* shift in what NV means when they say it, I don't know that they mean what ATI means when ATI says it.
There's only one definition of unification in hardware that I'm aware of:
  • a shader unit can execute arithmetic/branch/texture operations on a primitive (e.g. triangle, if geometry shading is supported) or a vertex or a pixel
I'd be interested in alternative definitions.
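
By way of illustration, a minimal sketch in Python of what that definition implies (the names and scheduling are my own toy example, not any IHV's design): every unit in the pool is interchangeable, so the scheduler can hand any of them a vertex, a primitive or a pixel thread.

from collections import deque

class UnifiedUnit:
    # One pool of identical units; each can run arithmetic, branch and
    # texture ops regardless of whether the thread shades a vertex,
    # a primitive (geometry shading) or a pixel.
    def run(self, thread):
        kind, ops = thread
        return f"{kind}: executed {len(ops)} ops"

def schedule(threads, n_units=4):
    # Naive round-robin dispatch of a mixed workload onto the shared pool.
    units = [UnifiedUnit() for _ in range(n_units)]
    queue = deque(threads)
    out, i = [], 0
    while queue:
        out.append(units[i % n_units].run(queue.popleft()))
        i += 1
    return out

work = [("pixel", ["mad", "tex", "mad"]),
        ("vertex", ["mul", "add"]),
        ("primitive", ["branch", "mad"])]
print(schedule(work))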

Jawed
 
geo said:
Edit: And to attempt to complete my mentalist trick, I think what Jawed is saying is, "Oh yes, they do mean they won't have a family of hardware unified parts until early 2009. But around about early 2007 they are going to go 'oh f**k!' and change their minds."
Sort of.

Except they've already had their "oh fuck!" moment. Hence the "softening".

Jawed
 
geo said:
Jawed, what do you make of this thread and the NV patent in it, and Uttar's theories?

http://www.beyond3d.com/forum/showthread.php?t=25836

I saw you on the thread, but mostly addressing the ATI side of the ball. Does the NV patent from July '05 do much for you re hardware unification? Or are you assuming it is already in G7x?

Edit: Scratch Uttar's theories! He was channelling some other IHV entirely! :)
That's merely out of order multi-threading - sort of like Itanic (erm, I think...) - sorry, Itanium.

Jawed
 
Jawed, most of your conclusions are highly speculative. Your assertion that Nvidia's 90nm parts should have surfaced last summer is a bit ridiculous since that would mean their entire 110nm roadmap for G7x was redundant.

Also, why are you so certain that DX10 workloads will favor hardware unification and that such unification will usher in higher performance/mm²?
 
Jawed said:
That's merely out of order multi-threading - sort of like Itanic (erm, I think...) - sorry, Itanium.

Jawed

Are you intentionally not acknowledging the many references to scheduling of vertex and pixel workloads in several recent Nvidia patents?
 
Sunrise said:
However, just because they are late, doesn't mean that G71 could've been introduced last summer.
The laptop version of G72 appeared last autumn.

I've never tried to suggest that G71 was due in the summer - I've always pegged it for ~November - with the lower end parts hitting over the preceding months.

Actually I find that a little far-fetched, when taking into account what NV needed to do with their underlying G70 architecture. That's like saying RSX could've been completed at the same time, with yields that had to be mass-production ready. Process maturity, yields, work involved in porting a fully non-low-k developed ASIC to low-k, shrinking it, re-tooling, library adjustments, potentially several respins to improve yields/margins, to name a few. These "facts" actually tell me the exact opposite.
Actually I agree - and RSX prolly complicated things somewhat. RSX has ended up at TSMC because NVidia just couldn't get it working at Sony - that's my theory.

I've always maintained that 90nm would be more difficult for NVidia than ATI. And that's how it's turned out - the point is they've made a lot of noise to the contrary. The only reason NVidia didn't get clobbered badly in 2005 is because of ATI's library problem. And the only reason a few thousand 7800GTX-512s appeared is because NVidia's plans for 90nm G71 went askew.

Jawed
 