NVIDIA GF100 & Friends speculation

Do you still play Crysis or CW? What do you think will happen in Crysis 2, which will have a DX11 renderer and - possibly - loads of tessellation?

And don't forget the whole GPGPU situation with physics, fluid simulation or path tracing.
Why can't anybody leak something that is really interesting?
 
Right.

There's an explanation for all the wild power usage / SP numbers.

It seems the 512CC will be reserved for the B1 part, which is currently slated for Q3.

295W is the power consumption of the part with 512CC and a 725+ MHz core (not going to see these for a while; I guess not everyone liked them).
275W is the power consumption of the A3 part, with 480CC, a 725+ MHz core and 1050 MHz memory, i.e. the OC GTX 480 models.
250W is the power consumption of the A3 part, with 480CC, a 700 MHz core and ~950 MHz memory, i.e. the GTX 480.

So well.. everyone was right.

I also heard that power consumption is still an issue, especially idle power usage. Tomorrow should bring a new driver release as NV engineers are trying to get it down as much as possible.


On GF104: if I wanted to feed my "GF100 disabled TMU rumor", I'd say that GF104 also has 64 TMUs. Power is again an issue.

This, coupled with the various unimpressive-at-best benchmark leaks... ouch, just ouch.

Trying to sell the card based on future features like tessellation is a fail. By the time those features are actually important the card will be long dead and gone, replaced by far better cards. That's always the case and it will be the case here, no doubt. Of course Nvidia has no other option, so they will.
 
Sure! They have no need to release a whole new part; Fermi just isn't going to be that competitive with their stuff.

You forget they can also drop the price on their parts, which is kind of long overdue to happen. ;)

Do you know what the word competitive means, Digi? If you do, then you should realize your statement makes no sense at all.
 
...or it could be doable in software with no penalty?
Doubtful. Highly doubtful. If something can be done in sw with no penalty, it would be done in sw. But for an operation as simple as bit manipulation, doing it in hw makes much more sense.

...but GT200 ALUs don't have it,
AFAIK, both G80 and GT200 ALUs have it. Stuff like this is explained in the hardware docs.

will they need software emulation to do it?
If they don't have hw support for it, then of course they'll need sw emulation for it.
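
To make the hw-vs-sw distinction concrete, here's a minimal sketch in plain C++ of what emulating an unsigned bitfield extract looks like (bfe_emulated is just an illustrative name of mine, not anything from NVIDIA's docs). A native bitfield-extract instruction does this in a single operation; the emulation pays for mask construction plus a shift and an AND on every extract.

#include <cstdint>

// Illustrative software emulation of an unsigned bitfield extract:
// pull 'len' bits starting at bit 'pos' out of 'value'.
// Assumes pos < 32 and pos + len <= 32.
uint32_t bfe_emulated(uint32_t value, uint32_t pos, uint32_t len)
{
    if (len == 0)
        return 0;
    const uint32_t mask = (len >= 32) ? 0xFFFFFFFFu : ((1u << len) - 1u);
    return (value >> pos) & mask;
}

Multiply that by every bit-twiddling inner loop in a GPGPU kernel and the case for a hardware instruction makes itself.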
 
Trying to sell the card based on future features like tessellation is a fail.
It's called product differentiation and it's a well-known and highly successful marketing technique. If you are at a store and you see two cards that are about the same price, and you know beforehand that they have similar performance, but one offers better DX11 functionality, which one would you probably get, all other things being equal?

The basic business argument for doing this is simple: by making it so that there are as many as possible different ways in which their product is better than the competition, nVidia ensures that if they fail to deliver on any one of those things, the others will at least partially make up for it. So obviously nVidia would prefer to have a much higher-performance part for the same manufacturing cost as ATI. But by making it so that their hardware can do other things (PhysX, better tessellation, etc.), nVidia makes it so that even if they get outclassed in performance, people will still have reasons to buy their hardware.
 
Do you know what the word competitive means, Digi? If you do, then you should realize your statement makes no sense at all.
Heck, I'm predicting now that ATi will not lower prices. Why? What's the need?

You should also factor in availability when figuring stuff like this. If Fermi launches in just token quantities there will be no need for ATi to lower prices, and it won't be competitive with the 58xx series until it is available in quantity.
 
WRT CPU/GPU scaling and Amdahl's law.

How open is the PIX frame capture format? Would it be possible to capture a DX11 frame, generate the command buffer and repeatedly resend that buffer?
It would remove most of the CPU load, just measuring GPU (and driver) performance.
Can the DX11 pre-generated command buffer be resent, or is it trashed by sending it to the GPU? Is there a PIX SDK? (Yeah, google that for me, I'm lazy like that :p, gtg now, will google later.)
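
FWIW, at the API level D3D11 already has the building block for the "resend" part: a deferred context bakes its calls into an ID3D11CommandList, and a finished command list can be executed repeatedly rather than being consumed. A minimal sketch (ReplayFrame and the recordFrame callback are placeholder names of mine; how you'd get a PIX capture into that form is the open question):

#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Record one frame's state changes and draw calls once, then replay the
// resulting command list many times. Per-iteration CPU cost is just the
// submission, so the timings mostly reflect driver + GPU work.
void ReplayFrame(ID3D11Device* device,
                 ID3D11DeviceContext* immediateCtx,
                 void (*recordFrame)(ID3D11DeviceContext*),
                 int iterations)
{
    // Record on a deferred context instead of the immediate one.
    ComPtr<ID3D11DeviceContext> deferredCtx;
    if (FAILED(device->CreateDeferredContext(0, &deferredCtx)))
        return;

    recordFrame(deferredCtx.Get());  // the captured frame's calls go here

    // Bake the recorded calls into an immutable, reusable command list.
    ComPtr<ID3D11CommandList> commandList;
    if (FAILED(deferredCtx->FinishCommandList(FALSE, &commandList)))
        return;

    // Resend the same list over and over; it isn't trashed by execution.
    for (int i = 0; i < iterations; ++i)
        immediateCtx->ExecuteCommandList(commandList.Get(), TRUE);
}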
 
Heck, I'm predicting now that ATi will not lower prices. Why? What's the need?

You should also factor in availability when figuring stuff like this. If Fermi launches in just token quantities there will be no need for ATi to lower prices, and it won't be competitive with the 58xx series until it is available in quantity.

I'm really struggling to see how the 480 is going to command $499 given these benchies... maybe in Nvidia's dreamworld...

I'm sure the first few will sell at that, the fanboys alone will snap them up, especially if supply is as bad as rumored, but I can't see any sustained 480 demand at $499.
 
But by making it so that their hardware can do other things (PhysX, better tessellation, etc.), nVidia makes it so that even if they get outclassed in performance, people will still have reasons to buy their hardware.

So that explains why GTX 2X0 and now GTX 4X0 command such a price premium versus game performance... oh wait, they don't... in fact their price exactly matches their performance.

The only outlier being the mooted 480, and as I say, I don't see it lasting at $499 at all.

BTW, from some forum reading, it seems most people see PhysX as a lame marketing tool. It's common knowledge that most PhysX effects could easily be done on the CPU if the CPU path weren't purposefully gimped to give Nvidia a marketing tool.
 
And don't forget the whole GPGPU situation with physics, fluid simulation or path tracing.
Why can't anybody leak something that is really interesting?

So true; Fermi was presented more as a GPGPU part than as yet another GPU to accelerate Crysis. I hope there will be a little corner in the upcoming reviews with some tests dedicated to GPGPU computing.

I understand that the entertainment market is eating up just about all GPUs sold nowadays, but there should be space for some more specialized reviews.
 
So that explains why GTX 2X0 and now GTX 4X0 command such a price premium versus game performance... oh wait, they don't... in fact their price exactly matches their performance.

The only outlier being the mooted 480, and as I say, I don't see it lasting at $499 at all.
I don't think anybody expects the price premium to be terribly dramatic. However, I should point out that another big reason for the strategy is OEM deals, where checkbox features have long been very important.

Anyway, I'm not entirely sure I agree that there is no price premium. But I'd have to spend some time collecting data, which I don't feel like doing right now. Let me just reiterate that I wouldn't expect a large one. It's also a bit difficult to infer the benefit seen by consumers when you don't have sales numbers as well: if there is no price premium, but nVidia is getting much better sales, for instance, then obviously there's some benefit there.

BTW, from some forum reading, it seems most people see PhysX as a lame marketing tool. It's common knowledge that most PhysX effects could easily be done on the CPU if the CPU path weren't purposefully gimped to give Nvidia a marketing tool.
Still, all other things being equal, would you or would you not buy the part that supports PhysX? Imagine a fantasy situation, for instance, where you have the option of a GT200 with PhysX support, and one without, at the same price. It may be a small benefit, but it's a benefit nonetheless.
 
How do you know that the unchanged part of the GPU is leading to the supposed vast loss in scaling? Or, if you prefer, what makes you think that these unchanged parts are a significant bottleneck on these games.

Because the performance gain going from a middling CPU to a very powerful one is often negligible. You can blame PCIe or system bandwidth but there have been enough tests showing that higher PCIe or system bandwidth is even less relevant to game benchmarks. So what are the other potential culprits if not the GPU itself?

These parts of the architecture are definitely a scaling limitation, but I've not seen it quantified anywhere.

Well there are micro-benchmarks that target specific functions. The problem is that nobody really picks apart a game to see what it's doing within each frame. What we need is a combination of PIX and NVPerfHud (or AMD's equivalent).

In the hypothetical 35% scaling case you first have to show the game can scale better.

The game can be scaling poorly because it is bandwidth or setup bound. Does that mean the game is inherently not scalable or does it simply mean there wasn't an adequate increase in bandwidth or setup in proportion with other things?

At work I'm not allowed to blame the workload if my design isn't adequate. That applies here as well.
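
To put that hypothetical 35% scaling figure in Amdahl's-law terms (my own back-of-the-envelope, not a measurement of any particular game): with

    S = 1 / ((1 - p) + p / s)

where p is the fraction of frame time in work that actually scales with the widened units and s is how much faster those units got, doubling the shader/TMU throughput (s = 2) while only seeing S = 1.35 solves to p ≈ 0.52. In other words, roughly half the frame time would be going to work that didn't scale at all (setup, bandwidth, CPU/driver overhead), which is exactly the kind of split those micro-benchmarks would need to confirm or rule out before blaming the workload.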

Do you still play Crysis or CW?

It's still a useful benchmark. I'm waiting to see AA vs noAA benches as that will give a clue as to whether things are capped by available bandwidth.

Trying to sell the card based on future features like tessellation is a fail.

Well to be fair tessellation is a current feature. It's just being used poorly.
 
Still, all other things being equal, would you or would you not buy the part that supports PhysX? Imagine a fantasy situation, for instance, where you have the option of a GT200 with PhysX support, and one without, at the same price. It may be a small benefit, but it's a benefit nonetheless.

Therein lies the problem -- in your fantasy example, the GT200 that supports PhysX would play games at half the framerate of the GT200 that doesn't.

Given that none of the PhysX effects I've seen so far have really "wowed" me, I'd go for the GT200 that wasn't saddled with the epic performance hit.
 