Nvidia GT300 core: Speculation

http://techreport.com/discussions.x/16036

Apparently, Nvidia has updated its terminology somewhat: CUDA now refers solely to the architecture that lets Nvidia GPUs run general-purpose apps, and the programming language Nvidia has been pushing is now known as C for CUDA. Hegde made it clear that CUDA is meant to support many languages and APIs, from OpenCL and DirectX 11 Compute Shaders to Fortran.
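
To make the terminology split concrete, here's a minimal sketch of what the "C for CUDA" language layer looks like; it targets the underlying CUDA architecture that OpenCL and DX Compute are also supposed to sit on. Illustrative only, no error handling, and the names (vecAdd, dA/dB/dC) are made up:

Code:
#include <cuda_runtime.h>

// "C for CUDA" side: one thread per array element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// Host-side launch (device pointers dA/dB/dC assumed to be allocated already):
// vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);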

Again, most people here seem to regard CUDA as proprietary. What a laugh. Ignorance is bliss, I guess.
 
Well, ATi seems to have had considerably less focus on GPGPU than nVidia in the past few years.

If you rethink your entire post, ATI's lack of focus is mostly in not pushing their own GPGPU API as hard as NVIDIA does. I was thinking more of the hardware side of things, and while some might disagree, IMHLO ATI has struck a better balance between the two worlds up to now, if one also bothers to look at the perf/mm2 side of things.

I certainly didn't mean to say that nVidia's GPUs suck in gaming. Rather that graphics-wise, nVidia hasn't really done anything since the G80. They didn't even bother to add DX10.1 functionality.
Both sides' GT200/RV770 are merely refreshed refreshes of the tired old 2006-era G80/R600 architectures. ATI had a lot of headroom for improvement if you consider the 2900XT/8800GTX differences of the past.

As for not supporting D3D10.1 I'm not sure but one rumour has it that they would have needed to revamp their TMUs for that. If that should be true, then the cost of redesigning an entire unit might be too high for a minor update like 10.1. That of course is no real excuse for failing to support it. NV is by no means absent when Microsoft and all the other IHVs vote on what gets included in each API version, and they know long in advance what is coming.

Other than that, the main focus of nVidia has been on software... most notably Cuda and PhysX.
It seems like NV's goal here is early penetration and support for anything related. This doesn't mean that NV isn't optimizing for OpenCL and/or D3D11 already. For AMD/ATI it's of course a lot cheaper overall to wait for the latter two to yield a wider penetration. CUDA development from NV's side, supporting it with ISVs/developers and initiatives like GPU Ventures should cost NV quite a bit in resources.

MfA,

Don't bother with the noise ;)

***edit: albeit tragically OT, NV's CP doesn't use .NET afaik. That doesn't mean that I like it, rather the contrary. If I wouldn't lose any functionality, I'd revert to the classic one.

Lukfi,

What I don't understand is why .NET is such an obstacle. Sure, you have one more thing to install, but maybe you'd need to do that anyway because of some other app. (That goes for everyone here.)

If you want to deal with really annoying driver problems connected to .NET usage, check HP's driver packages for printers and the like. I haven't seen any serious problems with ATI's drivers in that regard recently.
 
What's going on in this thread? 8)

Since when is DX10.1 obsolete?
Well you can look at it like this: once SM5 hardware hits the street, no one will use DX10.1 anymore; the main two targets will probably be SM3 (console and lower-class PC hardware) and SM5 (higher-class PC hardware). So that makes SM4.1 (and SM4 too) kinda "obsolete", no?

DX11 hardware will, DX10.1 hardware is.
DX11 hardware will support much more than DX10.1 hardware. It's much more interesting to use DX11 hardware right from the start.

I definitely don't see how 10.1 is obsolete. In fact I think Nvidia should pull the thumb out of their rears and get to it, seeing that 10.1 DOES indeed provide quite a nice fps boost!
Which mostly comes from reading the depth buffer in the shader, which can be done on G8x+ GPUs and is done in FC2, for example. So what's so important about supporting 10.1 for NVIDIA if they already support the only feature that improves performance? 8)

Game with physics accelerated on the GPU? None.
That was funny, thanks =)

Think better. ATI supports DX10.1 + tessellation. Both will be present in DX11, so:
1+1=2
ATI already supports some DX11 features, so they will probably see a boost in performance by DX11 times.
G7x supported some DX10 features. How did that boost their performance by DX10 times?
You're either DX11 compatible or not. Anything below R8x0 won't be DX11 compatible, and that means you won't see any changes on those GPUs when DX11 comes. Plus DX11 is about expanding the DX10 featureset, not improving the performance of DX10 features. So even if (and that's a big IF) any DX10 GPU would somehow work with DX11 features (I'm not talking about DXCS, which is known to work on DX10-class HW in the same way that CUDA has several compute device targets), that probably won't mean anything for the performance of today's DX9/10 engines.
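
For anyone wondering what those "compute device targets" look like in practice, this is roughly how CUDA exposes them at runtime; a sketch using the standard runtime API, and the capability-to-chip mapping in the comment is approximate:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // e.g. compute capability 1.0/1.1 for G8x/G9x, 1.3 for GT200 (rough mapping)
        printf("device %d: %s, compute capability %d.%d\n",
               d, prop.name, prop.major, prop.minor);
    }
    return 0;
}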

Really from an end user perspective there is very very little differentiating ATI's and NV's products atm. This is partly why we see NV pushing physx and CUDA so heavily.. they are in dire need of setting their products apart from ATI's.
Here's an interesting thought for all of you: NV will get DX10.1 support in the G3xx line and will automatically be able to use all these DX10.1 features which AMD is promoting now.
But AMD on the other hand probably won't get PhysX GPU acceleration support in their products because they think that it's unwise for them to support NVs technology.
So who's in better position wrt features supported from this point of view? 8)
(Although I do hope that NV will port PhysX to OpenCL at some point, but even that doesn't mean that today's AMD GPUs will be able to handle that port.)

So anyway... Anything new about GT300? 8)
 
If you rethink your entire post, ATI's lack of focus is mostly in not pushing their own GPGPU API as hard as NVIDIA does. I was thinking more of the hardware side of things, and while some might disagree, IMHLO ATI has struck a better balance between the two worlds up to now, if one also bothers to look at the perf/mm2 side of things.

I was thinking about the hardware side as well. As far as I know, the HD4000 series will be the first to support OpenCL, Havok physics, Avivo and all other sorts of GPGPU, whereas nVidia supports OpenCL, PhysX, Badaboom and other Cuda-related technologies on the entire product line from the 8000 series and up.
That's more than a two-year head start in terms of hardware capabilities.

Perf/mm2 doesn't mean anything to an end-user such as myself.
What I care about are things like price, performance level, power consumption, heat and noise.
If anything, nVidia has proven that the size of a chip has little to do with any of these factors, as far as the end-user is concerned.

As for not supporting D3D10.1 I'm not sure but one rumour has it that they would have needed to revamp their TMUs for that. If that should be true, then the cost of redesigning an entire unit might be too high for a minor update like 10.1.

Thing is, their DX10-level hardware has been around for about two-and-a-half years now. Normally a GPU architecture would have seen a major revision (major enough to add support for something like DX10.1), or would have been superseded by a new architecture altogether.

It seems like NV's goal here is early penetration and support for anything related. This doesn't mean that NV isn't optimizing for OpenCL and/or D3D11 already.

The irony of it all is that nVidia basically set the standard for GPGPU with Cuda, and both OpenCL and D3D11 seem to borrow heavily from Cuda, so nVidia doesn't need to do all that much. In a way they are already custom-made to fit nVidia's architecture, just like Cuda.
I wouldn't be surprised if ATi needs to redesign their GPU to match nVidia's performance in OpenCL (if I understood correctly, they already had to add a shared cache much like the G80 one to the HD4000 architecture, to get OpenCL supported at all).

CUDA development from NV's side, supporting it with ISVs/developers and initiatives like GPU Ventures should cost NV quite a bit in resources.

It may already be worth it.
 
I was thinking about the hardware side as well. As far as I know, the HD4000 series will be the first to support OpenCL, Havok physics, Avivo and all other sorts of GPGPU, whereas nVidia supports OpenCL, PhysX, Badaboom and other Cuda-related technologies on the entire product line from the 8000 series and up.
That's more than a two-year head start in terms of hardware capabilities.

I'm not clear on what exactly AMD intends to support in terms of past hw, but there's no particular reason why they couldn't have GPGPU support for R6x0 and RV6x0 either.

Perf/mm2 doesn't mean anything to an end-user such as myself.
What I care about are things like price, performance level, power consumption, heat and noise.
If anything, nVidia has proven that the size of a chip has little to do with any of these factors, as far as the end-user is concerned.

In pure theory GT200 could have ended up smaller, with the same gaming performance and lower power consumption, heat and noise, if they hadn't invested as much in added logic for GPGPU.

Thing is, their DX10-level hardware has been around for about two-and-a-half years now. Normally a GPU architecture would have seen a major revision (major enough to add support for something like DX10.1), or would have been superseded by a new architecture altogether.

As I said, there's no real excuse for NV not supporting D3D10.1. Apart from that, neither IHV has presented a new architecture, but rather refreshes.

The irony of it all is that nVidia basically set the standard for GPGPU with Cuda, and both OpenCL and D3D11 seem to borrow heavily from Cuda, so nVidia doesn't need to do all that much.

Alas, if OpenCL or D3D11 "borrowed heavily" from CUDA as you say, then all the markets from small form factor to PC desktop (or even high-end professional markets) would have to adopt G80 shenanigans. There's a multitude of IHVs, ISVs and even universities that set standards like OpenGL, and NV is by far not the only party involved, nor some kind of trendsetter here:

http://www.khronos.org/about/

The black outline in the center shows the Khronos Group's Board of Promoters, and here are their listed benefits:

http://www.khronos.org/members/benefits/

In a way they are already custom-made to fit nVidia's architecture, just like Cuda.

NV isn't alone in the GPU markets. OpenCL started as an Apple initiative.

I wouldn't be surprised if ATi needs to redesign their GPU to match nVidia's performance in OpenCL (if I understood correctly, they already had to add a shared cache much like the G80 one to the HD4000 architecture, to get OpenCL supported at all).

If that should be true, it sounds weird to say the least.

It may already be worth it.

No one said it isn't worth it. But when it comes to GPGPU, early adoption is essential for NV, not just because of AMD but also because of Intel's future Larrabee.
 
I'm not clear on what exactly AMD intends to support in terms of past hw, but there's no particular reason why they couldn't have GPGPU support for R6x0 and RV6x0 either.

There's a difference between GPGPU support and OpenCL though.
ATi already supported an early GPGPU API back in the X1000-days.
But I don't think anyone expects the X1000 to get OpenCL support, because the hardware just isn't up to it.

In pure theory GT200 could have ended up smaller, with the same gaming performance and lower power consumption, heat and noise, if they hadn't invested as much in added logic for GPGPU.

Well, that was my point: nVidia seems to have focused mainly on GPGPU in the past few years.
They're not doing badly either, because gaming performance, power consumption, heat and noise are still competitive with ATi. nVidia just has the added bonus of having a mature GPGPU API and an ever-growing collection of GPGPU software.

Alas, if OpenCL or D3D11 "borrowed heavily" from CUDA as you say, then all the markets from small form factor to PC desktop (or even high-end professional markets) would have to adopt G80 shenanigans. There's a multitude of IHVs, ISVs and even universities that set standards like OpenGL, and NV is by far not the only party involved, nor some kind of trendsetter here:

Alas? I don't see your point?
What's bad about G80 anyway? Heck, I have a Dell Laptop with a Cuda-capable Quadro GPU, so small-form-factor is absolutely no problem.

Aside from that, I never said NV is the only party involved. I just said that the OpenCL and DX11 standards seem to have copied a lot of things from Cuda.
This is not uncommon. OpenGL also copied heavily from DirectX when introducing things like GLSL (and DX's HLSL in turn was very similar to NV's Cg). Does that mean that Microsoft (or nVidia) controls OpenGL? Not at all.
Just that there was already a good solution with good hardware support, so it was a good starting point for a new standard.
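
To illustrate the kind of similarity I mean, here's a trivial kernel written in C for Cuda, with its rough OpenCL C equivalent in the comments (my own sketch, not taken from either SDK):

Code:
// CUDA kernel; the corresponding OpenCL C is shown in the comments.
__global__ void scale(float* data, float k, int n)    // OpenCL: __kernel void scale(__global float* data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // OpenCL: int i = get_global_id(0);
    if (i < n)
        data[i] *= k;
}
// Likewise __shared__ maps to __local, and __syncthreads() to barrier(CLK_LOCAL_MEM_FENCE).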

NV isn't alone in the GPU markets. OpenCL started as an Apple initiative.

Apple is dependent on NV and ATi hardware.
Apple mainly wanted a GPGPU solution that didn't tie them to a single vendor. I doubt that Apple had a lot of input in the technical side, since Apple doesn't design GPUs themselves, and doesn't have a lot of control over how NV and ATi design their hardware.
They just wanted NV and ATi to work together... Since NV had a good solution and ATi didn't, the NV solution was the starting point, and the standard doesn't stray too far from it.
This caused ATi to drop their previous GPGPU solution completely, and start over with their Stream SDK and CAL.
nVidia just split Cuda into 'C for Cuda' and the basic Cuda underpinnings... OpenCL will simply run on these Cuda underpinnings, now known simply as 'Cuda'. They didn't have to change much, they got it handed to them by Khronos.
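
Roughly speaking, those "underpinnings" are the driver-level API; something like this sketch (illustrative only, with a hypothetical kernels.ptx module) is the layer an OpenCL implementation on NV hardware would end up sitting on top of:

Code:
#include <cuda.h>   // the low-level driver API, i.e. the "Cuda underpinnings"

int main()
{
    CUdevice   dev;
    CUcontext  ctx;
    CUmodule   mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    cuModuleLoad(&mod, "kernels.ptx");        // hypothetical pre-compiled kernel module
    cuModuleGetFunction(&fn, mod, "scale");   // any front-end (C for Cuda, OpenCL, ...)
                                              // ultimately drives this kind of layer
    return 0;
}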
 
Well, that was my point: nVidia seems to have focused mainly on GPGPU in the past few years.
They're not doing badly either, because gaming performance, power consumption, heat and noise are still competitive with ATi. nVidia just has the added bonus of having a mature GPGPU API and an ever-growing collection of GPGPU software.

Their perf/mm2 ratio problem is mostly limited to GT200 and not its predecessors (compare R600<->G80, RV670<->G92 instead).

Alas? I don't see your point?
What's bad about G80 anyway? Heck, I have a Dell Laptop with a Cuda-capable Quadro GPU, so small-form-factor is absolutely no problem.
Depends what someone means with small form factor:

"From the get-go OpenCL is intended to address both high-end systems, mobile and embedded devices,"
http://www.hpcwire.com/blogs/OpenCL_On_the_Fast_Track_33608199.html and I'll come back later to that one....

Aside from that, I never said NV is the only party involved. I just said that the OpenCL and DX11 standards seem to have copied a lot of things from Cuda.
The original draft was handed in by Apple and they even intended to patent it at first for themselves. Sadly enough the file isn't available anymore:

http://forums.macrumors.com/showpost.php?p=6490415&postcount=1

..."copied" is a far stretch. NV didn't invent the wheel with CUDA, and it's perfectly understandable that for such APIs there will be quite a few similarities, since the focus here is on heterogeneous computing.

Apple is dependent on NV and ATi hardware.
Not only. Back to the embedded market: Apple has a multi-year, multi-license agreement with Imagination Technologies, and their current and probably future iPhones and whatever other small handheld gadgets contain IMG IP. IMG is coincidentally on the board of Khronos promoters and their SGX IP is well suited for GPGPU.


Apple mainly wanted a GPGPU solution that didn't tie them to a single vendor. I doubt that Apple had a lot of input in the technical side, since Apple doesn't design GPUs themselves, and doesn't have a lot of control over how NV and ATi design their hardware.
They just wanted NV and ATi to work together... Since NV had a good solution and ATi didn't, the NV solution was the starting point, and the standard doesn't stray too far from it.
This caused ATi to drop their previous GPGPU solution completely, and start over with their Stream SDK and CAL.
nVidia just split Cuda into 'C for Cuda' and the basic Cuda underpinnings... OpenCL will simply run on these Cuda underpinnings, now known simply as 'Cuda'. They didn't have to change much, they got it handed to them by Khronos.
The "but" that breaks the above scenario is the fact that Apple's interest in an open-standard heterogeneous computing language is not limited to the markets you're focusing on. This should not mean that Apple won't continue buying hw from AMD or NVIDIA, but their spectrum of interests is far wider and far more complicated than you seem to think.


***edit: and to come back to the real topic (since those quote orgies are downright silly....): http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf

Slides 203-204
 
Their perf/mm2 ratio problem is mostly limited to GT200 and not its predecessors (compare R600<->G80, RV670<->G92 instead).

Again: what's this hangup about perf/mm2? What 'problem' is there anyway?
I already disqualified that argument earlier.
GT200 does well against ATi's offerings in all the areas I've mentioned. So what are you talking about?
In fact, technically G92 is the midrange equivalent of the GT200... G92 just happens to be on the market longer. But nVidia never replaced it with a downsized GT200, because in essence a downsized GT200 *is* a G92 (which again is a result of the lack of hardware evolution in the past few years). They just gave G92 a die shrink, and that was that. G92 shouldn't really be seen as a predecessor to GT200 in my opinion. They belong to the same generation of hardware, more or less.

Depends what someone means with small form factor:

I thought 'small-form-factor PC' was a pretty common term.

The original draft was handed in by Apple and they even intended to patent it at first for themselves.

Nobody argued that Apple started the OpenCL initiative. There is no need for you to continue this line of argument.

..."copied" is a far stretch. NV didn't invent the wheel with CUDA, and it's perfectly understandable that for such APIs there will be quite a few similarities, since the focus here is on heterogeneous computing.

Let's put it another way:
ATi also had a 'solution' with CTM and Brook+.
OpenCL is far more similar to Cuda than to ATi's solution.
So your argument isn't a good one in this case.

Not only.

That wasn't really the point.
Perhaps I should have said: Apple depends on third-party suppliers for their graphics solutions. It's just that NV and ATi are the 'usual suspects', at least in the PC market.
One could also argue that in the near future, Apple might use Intel GPUs with OpenCL support (Larrabee).

The "but" that breaks the above scenario is the fact that Apple's interest in an open-standard heterogeneous computing language is not limited to the markets you're focusing on. This should not mean that Apple won't continue buying hw from AMD or NVIDIA, but their spectrum of interests is far wider and far more complicated than you seem to think.

I was focusing mainly on the PC market, since the discussion was about ATi and nVidia desktop solutions.
That doesn't mean I am not aware of the fact that OpenCL goes beyond just this hardware, or that Apple also has other product lines than just desktops and laptops. So don't insult me by saying that my view is overly simplistic, just because this discussion happens to focus on only one part of the market. The thread *is* about GT300, you know.

I think it's pretty obvious when you're developing a hardware-independent standard such as OpenCL that you try to keep it generic enough to make it scale to various hardware levels. That is not unique to Apple or OpenCL. Even Cuda scales from simple IGPs to high-end Tesla clusters.
In fact, OpenCL doesn't even require a GPU anyway. It is supported on multicore CPUs as well. But I hope you're not going to bring that up as well... because I don't really see what you are actually arguing for or against... you just seem to want to throw in random lines of argument and somehow try to imply some connections... Is there a point to what you're trying to say? If so, can you formulate it into a coherent standpoint? That way there is no need for endless quotes, because there's an actual line of reasoning one can follow.
 
Again: what's this hangup about perf/mm2? What 'problem' is there anyway?

To set a few things clear here (and to make folks in the audience happier, not you obviously): I'm the former NVIDIA shill here, not you. The perf/mm2 sucks on GT200 and I'm entitled to my opinion whether you like it or not.

I already disqualified that argument earlier.

Am I still allowed to disagree? If yes, thank you in advance.

GT200 does well against ATi's offerings in all the areas I've mentioned. So what are you talking about?

Pretty much an opinion that many have shared and are still sharing at the moment, irrespective of their preferences (if there are any, that is).

In fact, technically G92 is the midrange equivalent of the GT200... G92 just happens to be on the market longer. But nVidia never replaced it with a downsized GT200, because in essence a downsized GT200 *is* a G92 (which again is a result of the lack of hardware evolution in the past few years). They just gave G92 a die shrink, and that was that. G92 shouldn't really be seen as a predecessor to GT200 in my opinion. They belong to the same generation of hardware, more or less.

I compared R600 to G80, G92 to RV670 and obviously GT200 to RV770 for a reason. RV770 packs far more performance per sqmm than GT200.
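
Back-of-the-envelope version of that claim, using the commonly quoted die sizes and a placeholder performance ratio; all numbers are approximate and purely for illustration:

Code:
#include <cstdio>

int main()
{
    // Commonly quoted die sizes: ~576 mm^2 for GT200 (65 nm), ~256 mm^2 for RV770 (55 nm).
    // The performance figures are placeholders for "roughly comparable".
    const double gt200_mm2 = 576.0, rv770_mm2 = 256.0;
    const double gt200_perf = 1.0,  rv770_perf = 0.9;

    printf("GT200 perf/mm2: %.4f\n", gt200_perf / gt200_mm2);
    printf("RV770 perf/mm2: %.4f\n", rv770_perf / rv770_mm2);
    printf("RV770 advantage: %.1fx\n",
           (rv770_perf / rv770_mm2) / (gt200_perf / gt200_mm2));
    return 0;
}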

I thought 'small-form-factor PC' was a pretty common term.

Did I mention anywhere PC?

That wasn't really the point.
Perhaps I should have said: Apple depends on third-party suppliers for their graphics solutions. It's just that NV and ATi are the 'usual suspects', at least in the PC market.

If OpenCL development were exclusively pushed by those two vendors, then I'd accept the above. Again, Apple has wider ambitions, and the iPhone and the effort Apple's marketing machine has spent on it should tell you something.

One could also argue that in the near future, Apple might use Intel GPUs with OpenCL support (Larrabee).

I'm afraid power consumption wouldn't allow it that easily. There's a reason even Intel continues to use IMG IP for small form factor devices up to MIDs, and it doesn't sound like that integration will end anytime soon either.

I was focusing mainly on the PC market, since the discussion was about ATi and nVidia desktop solutions.
That doesn't mean I am not aware of the fact that OpenCL goes beyond just this hardware, or that Apple also has other product lines than just desktops and laptops. So don't insult me by saying that my view is overly simplistic, just because this discussion happens to focus on only one part of the market.

If you're too sensitive to hold a public debate then it's not my fault. OpenCL is and will remain a complicated issue and it will continue to be aimed also at mobile and embedded markets, otherwise companies like IMG, Ericsson, Freescale, Texas Instruments, Samsung, ARM and others wouldn't have much interest in sitting on the Board of Promoters, which costs money, mind you.

The thread *is* about GT300, you know.

Time to come back to the real topic then....
 
To set a few things clear here (and to make folks in the audience happier, not you obviously): I'm the former NVIDIA shill here, not you. The perf/mm2 sucks on GT200 and I'm entitled to my opinion whether you like it or not.

I don't disagree that the perf/mm2 'sucks' compared to ATi. Seems to be a very simple fact, deduced from the observation that nVidia GPUs with similar performance to ATi ones have a considerably larger die area.
I just don't see any relevance in this tidbit of a fact. I asked you to elaborate on the relevance, but you just ignored the question. So there apparently is no relevance... You just like to bring it up for random reasons?

Conversely, one could say that the power consumption/mm2 'sucks' on ATi boards, derived from the fact that an ATi GPU of a certain size consumes a lot more power than an nVidia one of the same size.
Which is equally irrelevant.
Or that ATi GPUs have poor price/mm2...

Am I still allowed to disagree? If yes, thank you in advance.

You're allowed to disagree, but it is poor form to keep reiterating points on which there is no common agreement.

I compared R600 to G80, G92 to RV670 and obviously GT200 to RV770 for a reason. RV770 packs far more performance per sqmm than GT200.

Again, why the focus on die-area?
Obviously nVidia and ATi GPUs are very different.
RV770 may pack more performance per mm2, but it is also more expensive per mm2, and it consumes far more power per mm2.
It seems that you just deliberately try to pick a 'metric per die-area' that happens to favour ATi, while ignoring everything else.
I don't see any reason to look at die-area in this sense.

Despite their different die-sizes, the two GPUs operate in similar priceranges, have similar performance, and have similar power consumption levels.

Did I mention anywhere PC?

Small-form-factor is a term to identify a certain type of PC. I assumed the PC part was implicit:
http://en.wikipedia.org/wiki/Small_form_factor
You seem to try to stretch the term to include embedded systems and mobile devices, but that would be your own specific definition, and not a commonly used one.

I'm afraid power consumption wouldn't allow it that easily. There's a reason even Intel continues to use IMG IP for small form factor devices up to MIDs, and it doesn't sound like that integration will end anytime soon either.

Obviously I was talking about the desktop systems that Apple makes. You're so far off-topic that you're now just talking rubbish.
Since they currently have nVidia or ATi cards, and Larrabee will be a card of similar specs, I don't see why Apple can't use Larrabee in future desktop products.
Power consumption won't be an issue.

Time to come back to the real topic then....

Well, you're the one bringing up all sorts of unrelated issues, such as ATi and their perf/mm2, or how Apple makes phones and all that. Don't blame me.
 
Conversely, one could say that the power consumption/mm2 'sucks' on ATi boards, derived from the fact that an ATi GPU of a certain size consumes a lot more power than an nVidia one of the same size.
Which is equally irrelevant.

Actually, far more irrelevant I think.

... and it consumes far more power per mm2.

Hmm... interesting metric. I realize that you are using this argument to point out that other metrics being touted are not necessarily relevant to the consumer, but I don't think that mentioning "power per mm2" as being equivalent helps you.

Debating the relevance of various metrics of "efficiency" to the end consumer is natural, since from the consumer's point of view the metric that they see is typically only performance per unit of cost. This can be improved by a manufacturer by either making parts that have higher performance at a lower manufacturing cost, or by accepting lower margins on a more expensive, but also high-performing part. The result from the consumer's perspective is the same, but from the manufacturer's perspective it clearly isn't.

But arguing that "power per mm2" is equivalent to the other metrics (and that they are all irrelevant) doesn't seem like a strong argument. Power per mm2 is patently ludicrous as a metric in isolation, while the other metrics have demonstrable worth. If you're aiming to choose a metric that is near-worthless in itself then you have succeeded, but it doesn't necessarily devalue other potential metrics by you doing so.

Performance per unit area has a direct effect on the end costs seen by the consumer. To argue against this is pointless - in the market in question here performance largely dictates worth, and area largely dictates cost. Performance per unit area is therefore the most direct metric that dictates the consumer's final costs, unless GPU manufacturers are to operate as a charity and donate all their cards to the consumers.
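
To spell out the "area largely dictates cost" part with a crude sketch (the wafer price is invented, and yield and edge effects are ignored):

Code:
#include <cstdio>

int main()
{
    // Crude model: candidate dies per 300 mm wafer ~ wafer_area / die_area
    // (ignoring edge loss and yield), so cost per die scales roughly with area.
    const double pi = 3.14159265358979;
    const double wafer_area = pi * 150.0 * 150.0;   // mm^2 for a 300 mm wafer
    const double wafer_cost = 5000.0;               // hypothetical wafer price

    const double die_sizes[] = { 256.0, 576.0 };    // ~RV770-sized vs ~GT200-sized
    for (double die_mm2 : die_sizes) {
        double dies = wafer_area / die_mm2;
        printf("%.0f mm2 die: ~%.0f dies/wafer, ~$%.0f per die\n",
               die_mm2, dies, wafer_cost / dies);
    }
    return 0;
}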

In contrast, by deliberately packing unused area onto a die "power per unit area" can be apparently "improved". Ridiculous. It can also be improved by clocking the same chip at a lower rate with lower voltage. A highly inefficient chip that was only able to run 50% of its transistors at any one time due to imbalances or inefficiencies in the architecture could "win" in this metric compared to a chip of half the size that was able to run at near peak efficiency all the time.

If consumers believe that large chips, running at low voltage, with 99.9999999% of the die area unused are the best thing ever then this is great. The best chip in the world is now a large rock with no transistors on it consuming no power. I can see people lining up down the streets to buy them right now, and I guarantee R&D costs for GPU manufacturers are going _way_ the hell down... :)

Performance per unit area has relevance, albeit indirectly, to the consumer because it eventually dictates the prices that they will see.

Performance per unit of power consumed also has some relevance, but again in an indirect way - the system places constraints on the amount of power that can consumed, so the metric participates in dictating the highest overall performance that can be offered to the consumer. Performance is the key dictator of cost, so this then affects the prices seen by the consumer.

Power per unit area? Yes, it seems irrelevant, but how does this irrelevancy invalidate other metrics?
 
Performance per unit area has a direct effect on the end costs seen by the consumer. To argue against this is pointless - in the market in question here performance largely dictates worth, and area largely dictates cost. Performance per unit area is therefore the most direct metric that dictates the consumer's final costs, unless GPU manufacturers are to operate as a charity and donate all their cards to the consumers.

In theory you would be right. In practice the end-user doesn't notice anything about different die areas, because prices are competitive.
Since manufacturers are indeed 'operating as a charity' as you put it, your entire argument falls to pieces; the metric *is* irrelevant to the end-user.
That was exactly my point.

Despite your lengthy post, you seem to fail to see that power per mm2 would be an interesting metric if you consider chips that are otherwise equal (same size, performance etc.).
The same can be said for performance per mm2. Now if nVidia and ATi made chips of the same size, obviously the one with the best performance per mm2 would be the best performer.
But this is not the case. The nVidia chip compensates for lower performance per mm2 by being larger, and as such delivering comparable performance levels.

Of course a larger die will also increase other factors, such as power consumption... But because that metric was smaller with nVidia as well, there is little difference between the power consumption of nVidia and ATi parts, and nVidia actually has a slight advantage.

Which leaves only the cost, which nVidia is more than willing to absorb in the overall price, so this is hidden from the end-user.
Hence, it doesn't mean anything.
 
Charlie Demerjian said:
Nvidia's GT300 is set to tape out in June.
I guess now the Charlie-hate-bandwagon will flip. ;)

That puts the GT300 in roughly the same time frame as the RV870 tape-out (also Q2).
 
In theory you would be right. In practice the end-user doesn't notice anything about different die areas, because prices are competitive.
Since manufacturers are indeed 'operating as a charity' as you put it, your entire argument falls to pieces; the metric *is* irrelevant to the end-user.
That was exactly my point.

[edit] Nice ninja edit.[/edit]
Despite your lengthy post, you seem to fail to see that power per mm2 would be an interesting metric if you consider chips that are otherwise equal (same size, performance etc.).

I will agree to temper my statements as I do see where you are coming from - taking the problem space as a whole there are circumstances in which power/area can become a constraint that is relevant to the consumer. But note that you were arguing that all these metrics are somehow irrelevant to the consumer, whereas I will turn it around and argue that any of these metrics can be _relevant_ to the consumer, but under the right conditions.

At any one time we can define reasonable GPUs that can be constructed by marking out regions of a volume that encompasses power, performance and cost (area). Any two elements out of those are not useful if taken in complete isolation, since you do not know if you lie within a permissible region. Within the interesting region you can pick a point that defines a product. Simplistically the cost is given by the area, the amount you can sell it for is given by the performance, and the markets in which it is viable are decided by the power.
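
A toy version of that volume-of-viable-GPUs idea, just to pin down the reasoning (every number below is invented): a part is "buildable" for a market if it stays inside that market's power and area limits, and within that region perf/area is what ends up setting relative cost:

Code:
#include <cstdio>

struct Part  { const char* name; double area_mm2, perf, power_w; };  // invented figures
struct Limit { double max_area_mm2, max_power_w; };                  // hypothetical market limits

bool viable(const Part& p, const Limit& m)
{
    return p.area_mm2 <= m.max_area_mm2 && p.power_w <= m.max_power_w;
}

int main()
{
    Limit highEndDesktop = { 600.0, 250.0 };
    Part  parts[] = { { "big die",   576.0, 1.0, 230.0 },
                      { "small die", 256.0, 0.9, 160.0 } };

    for (const Part& p : parts)
        printf("%s: %s, perf/mm2 = %.4f\n", p.name,
               viable(p, highEndDesktop) ? "viable" : "not viable",
               p.perf / p.area_mm2);
    return 0;
}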

Power/area can become a limiting factor if performance/area is very similar, and area itself is not the dictating factor in if the part is buildable. Power and area are therefore both hard constraints for a particular market - is a part viable or not?

So let's take a particular circumstance for two hypothetical parts from different manufacturers which lie within the "buildable" category for a particular market, and see which factors are dictating what the customer sees in the market, and which are not.

Total power dissipated - close (within 10% let's say)
Total performance - close (again let's say it's within 10%)
Total area - different (one part 50%+ larger than the other)

In this circumstance the market cost is likely driven by performance/area, so this has become the relevant metric to the consumer, while power/area is not. So in this circumstance (I will agree that it is also possible to construct a case where power/area is of interest) the end result the consumer sees is driven by performance/area.

No manufacturer is going to continue to operate as a charity (at least not for long). Not unless they are a charity, getting their income from elsewhere. They might choose to operate with lower margins for a period while market forces drive that necessity, but it is not a situation that they will choose to remain in if it can be avoided.

So to say which metrics are relevant to a consumer today and which are not let's look at the market and decide -

Are the prices that the consumer sees today being driven by parts with high performance per unit area or not? From what mechanism did the current prices that the consumer is seeing in the marketplace originate, and how did the current market pricing structure come about?

Did the market settle to its current state due to the release of large parts with low performance/mm2, or was it driven by small parts with high performance per mm2?
 
June tape-out for a Q4 release is indeed a tad aggressive; as a reference point, GT200 taped out in December and was released in June.
 