NVidia's D: V, M and E

Jawed

GPU code names have gotten a bit more rational:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3197&Itemid=1

These code letters make life a bit easier (to discuss around here), don't they? Desktop: value, mainstream and enthusiast.

OK, so the full code is a bit obscure, since the number simply refers to the 8 series (the GPUs described in the article are actually "refresh 8" series GPUs). But I like them.
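Purely as a sketch of how I'm reading these codes (the decomposition below is my own guess from the article, not anything NVIDIA has confirmed), something like this in Python:

# Hypothetical decoding of the codes as I read them: 'D' for desktop, a
# generation digit, then V/M/E for value/mainstream/enthusiast. The mapping is
# an assumption based on the article, not an official NVIDIA scheme.

SEGMENTS = {"V": "value", "M": "mainstream", "E": "enthusiast"}

def decode(code):
    """Split a code like 'D8E' into market, generation and segment."""
    market, generation, segment = code[0], code[1:-1], code[-1]
    return {
        "market": "desktop" if market == "D" else market,
        "generation": int(generation),
        "segment": SEGMENTS.get(segment, segment),
    }

print(decode("D8E"))  # {'market': 'desktop', 'generation': 8, 'segment': 'enthusiast'}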

So, who are these codes for?

Are they going to be fine-grained enough for our purposes?

Will we be confused when an M GPU becomes the refresh-generation's V GPU, but retains its M name?...

Jawed
 
G92 has caused problems ("is that a refresh of G80?").

G9x has also caused problems ("Are these refreshed G8x GPUs or ...?")

These codenames, if they're real (I'm not willing to assume they're real yet), seem like a rationalisation. Of course there's room for confusion when a naming change takes place - and you can argue it's an opportunity to obfuscate (like G70 was, from NV4x).

Jawed
 
I'm not sure those are codename *changes*, but I could be wrong. G84 and G86 already had similar D8x names - but those aren't the ones used inside the company, except perhaps in the sales teams, I'd presume.

Furthermore, I'm sadly not even sure that one Dnx name matches one chip. It could be that one chip might have several such names, depending on the SKU configuration or the bus width. Once again, I'm not sure at all about this.

Anyway I said it and I'll say it again: the question for G92 and G9x in general is what kind of memory will be used. G84/G86 wouldn't benefit much from more processing power with similar amounts of bandwidth. And since G92 seems to have a narrower bus than G80, it'd need faster memory to compete in the enthusiast market too. We'll see...
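As a rough back-of-envelope sketch of that bandwidth point (the bus widths below are the commonly reported ones; the memory data rates are just illustrative guesses, not confirmed specs):

# Peak memory bandwidth = (bus width in bits / 8) * effective per-pin data rate.
# The bus widths are the commonly reported ones; the data rates are
# illustrative assumptions only.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8.0 * data_rate_gbps

print(bandwidth_gb_s(384, 1.8))  # G80-class 384-bit bus at ~1.8 Gbps: ~86.4 GB/s
print(bandwidth_gb_s(256, 2.7))  # a 256-bit bus needs ~2.7 Gbps to match: ~86.4 GB/s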
 
I'm not sure those are codename *changes*, but I could be wrong. G84 and G86 already had similar D8x names - but those aren't the ones used inside the company, except perhaps in the sales teams, I'd presume.
I've certainly got my suspicions that that's all they could be.

Furthermore, I'm sadly not even sure that one Dnx name matches one chip. It could be that one chip might have several such names, depending on the SKU configuration or the bus width. Once again, I'm not sure at all about this.
Shame.

Anyway I said it and I'll say it again: the question for G92 and G9x in general is what kind of memory will be used. G84/G86 wouldn't benefit much from more processing power with similar amounts of bandwidth. And since G92 seems to have a narrower bus than G80, it'd need faster memory to compete in the enthusiast market too. We'll see...
That's stuff for a different thread. Though I suspect G84/86 are both ALU-bottlenecked: their ALU:TEX ratio is too low.

Jawed
 
The most interesting news in all this is that the enthusiast part is being pushed off until next year. If you believe Fuad, that is.
 
The most interesting news in all this is that the enthusiast part is being pushed off until next year. If you believe Fuad, that is.

It's a glimpse of the future if we only have one company competing in the high end. Why bother to push anything new out if you can keep selling your current tech at good margins and with no competition? It would be stagnation, just like Creative sound cards.
 
It's a glimpse of the future if we only have one company competing in the high end. Why bother to push anything new out if you can keep selling your current tech at good margins and with no competition? It would be stagnation, just like Creative sound cards.

Something I've been saying for a long time now, in both CPU & GPU markets. It sucks for everyone but the company on top, including consumers.
 
It's a glimpse of the future if we only have one company competing in the high end. Why bother to push anything new out if you can keep selling your current tech at good margins and with no competition? It would be stagnation, just like Creative sound cards.

And that is bad in what way? I still don't see any better consumer sound cards around after all these years.
 
Even without the need to bring out something faster, you might hope that Nvidia would be tempted by a smaller process that would provide better margins, and possibly bring the benefit of reduced power and perhaps better speed for the end user.
 
I think that's his point. No competition leading to stagnation of the technology.

What stagnation? Last time I took a look, the X-Fi was a great piece of hardware, very usable for recording with a stable ASIO2 driver and kicking serious arse in a Windows environment. I can't see why that is supposed to be "bad" in any way. If anyone made a better product, people would buy it. But so far, nothing is happening. Only VIA had a nice try with the Envy24; a pity they didn't bother pushing it further.
 
There will always be a need for something faster; it just won't arrive as soon as it could have.

Intel in particular has massive fabs that are enormous money drains. If they are underutilized, the company loses a massive amount of money.

If the rate of processor improvement drops to the point that customers defer replacing their old machines, Intel makes less money than if it continued to compete against its old products.

Intel might spread out its tick-tock strategy a bit if it wants a better ROI on its processes, but allowing the gap between its processes and the foundries' to narrow can lead to future problems.

Nvidia doesn't have fabs, so there's not the immediate pressure of underutilization charges, but it still has to sell products.

It seems that waiting on Microsoft to change DX revisions to create the need for new products is not a sure thing.
 
What stagnation? Last time I took a look, the X-Fi was a great piece of hardware, very usable for recording with a stable ASIO2 driver and kicking serious arse in a Windows environment. I can't see why that is supposed to be "bad" in any way. If anyone made a better product, people would buy it. But so far, nothing is happening. Only VIA had a nice try with the Envy24; a pity they didn't bother pushing it further.

Was or is?

Personally I prefer the AuzenTech cards with the C-Media chips.
 
Was or is?

Personally I prefer the AuzenTech cards with the C-Media chips.

Those are in a completely different ballpark as far as performance goes (a way lower one). Not even comparable, let alone the whole ASIO thingy and so on.

Davros: numbers mean nothing, but the X-Fi is way better than Audigy (or any other card out there as of now) in every regard.

Btw, I use integrated audio right now and have no Creative card, but it is a nice piece of HW regardless.
 
To each his own. Personally I prefer sound quality and stable drivers over a couple of extra frames a second. Besides, I don't recall Creative supporting Dolby Digital Live or DTS.
 