NVIDIA GF100 & Friends speculation

But you've picked the one which scales best - by far.
I'm simply excluding games that aren't scaling :oops:

Arma 2 also scales about the same - and was doing so back at HD5870 launch.

Also it's without AA actually - IIRC the performance increase is typically a bit less with AA (unless the score without AA would be CPU limited of course), probably due to the comparatively small increase in memory bandwidth.
HD5870's only got about 20% extra bandwidth. Even if you say that HD4890 only really needs about 100GB/s of bandwidth, HD5870's still got far less than double the bandwidth. Though since MSAA uses bandwidth "non-linearly" depending on scene complexity (degree of compression), it gets pretty tricky to say.
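Just as a back-of-the-envelope check of that ratio (assuming the reference memory specs as I recall them - 3.9 Gbps effective for HD4890 and 4.8 Gbps for HD5870, both on a 256-bit bus):

```cpp
#include <cstdio>

int main() {
    // Assumed reference memory specs: effective GDDR5 data rates, 256-bit bus on both cards.
    const double hd4890_gbps = 3.9;          // HD4890 (975 MHz GDDR5)
    const double hd5870_gbps = 4.8;          // HD5870 (1200 MHz GDDR5)
    const double bus_bytes   = 256.0 / 8.0;  // bytes per effective transfer

    const double bw4890 = hd4890_gbps * bus_bytes;  // ~124.8 GB/s
    const double bw5870 = hd5870_gbps * bus_bytes;  // ~153.6 GB/s

    std::printf("HD4890 %.1f GB/s, HD5870 %.1f GB/s, ratio %.2fx\n",
                bw4890, bw5870, bw5870 / bw4890);   // ~1.23x
    return 0;
}
```

So roughly 1.23x - which is why doubling the ALUs/TMUs doesn't come close to doubling the bandwidth available per unit of work.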

Comparing to a single Juniper it comes in fairly consistently at 90-100% faster. Comparisons with dual-Juniper are undermined due to what the driver is doing CPU-side, i.e. some of the CPU workload is being accelerated by CrossFire - it's not just the pair of GPUs that make things go faster.

Jawed
 
I'm simply excluding games that aren't scaling :oops:

And in doing so you're biasing the analysis based on arbitrary criteria. How exactly do you determine that the lack of scaling is the game's fault and not a hardware deficiency? Off-topic but I'm interested to know.
 
Comparing to a single Juniper it comes in fairly consistently at 90-100% faster.
That's not what I remember. Quite the contrary, IIRC it is very, very rare that HD5870 is more than 80% faster than HD5770 (except in some synthetics). Not even the Metro link you posted exceeds that.
 
When considering tessellation and its use in games, you also have to consider that with the AMD products you basically have a range from sub-$100 to $400 with the same per-clock subdivision rate. When NVIDIA starts producing their derivative products, their tessellation/geometry performance is going to be scaled back - let's say GF104 has half the shader engines (there is history to suggest that this may be the case), then it has half the tessellation/geometry performance, and the products further down the line have even less.

So, do we really think it makes sense for developers to solely focus on one single high-end product?
What's stopping them from implementing tessellation detail levels?
Most of what's out there with DX11 tessellation right now is already unplayable on anything less than a 5850. So why would developers bother with these $100 to $200 DX11 products of yours at all?
Plus, having half the geometry engines of GF100 is still twice what Cypress has.
 
What if NV does not care about having the best graphics card, just one that's fast enough, while promoting the GPGPU part? If they think GPGPU computing will be their future market, they could try the same strategy Sony used with the PS3 to win the Blu-ray war.

Sony first won the Blu-ray battle, then lost the console war, and IMHO Blu-ray is stuck at a stalemate with digital distribution right now. All in all, not a very inspiring analogy.
 
When considering tessellation and its use in games, you also have to consider that with the AMD products you basically have a range from sub-$100 to $400 with the same per-clock subdivision rate. When NVIDIA starts producing their derivative products, their tessellation/geometry performance is going to be scaled back - let's say GF104 has half the shader engines (there is history to suggest that this may be the case), then it has half the tessellation/geometry performance, and the products further down the line have even less.

So, do we really think it makes sense for developers to solely focus on one single high-end product?

Why should geometry performance remain static as you scale shader performance? Tessellation requires hull and domain shading, which aren't done by the fixed-function tessellation engine. Not to mention the shading complexity for the other parts of the rendering engine.

You can't benefit from high tessellation performance if you've scaled down the shaders to the point where the chip is completely bottlenecked somewhere else.
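To put rough numbers on that, here's a toy max-of-stage-times model - every rate and workload in it is invented purely for illustration, not a real GF104 or Cypress figure - showing that once the shader array is halved while setup/tessellation is kept, the chip is simply shader-bound and the surplus setup rate does nothing:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: frame time is roughly bounded by the slowest stage.
// All rates and workloads below are invented for illustration only.
struct Gpu {
    double tri_rate;    // triangles set up per ms
    double shade_rate;  // units of shading work per ms
};

double frame_ms(const Gpu& g, double tris, double shade_work) {
    return std::max(tris / g.tri_rate, shade_work / g.shade_rate);
}

int main() {
    const double tris = 2000.0, shade_work = 8000.0;  // hypothetical per-frame workload

    const Gpu full {1000.0, 1000.0};  // full chip
    const Gpu half {1000.0,  500.0};  // same setup/tessellation rate, half the shaders

    std::printf("full chip   : %.1f ms\n", frame_ms(full, tris, shade_work)); // already shader-bound
    std::printf("half shaders: %.1f ms\n", frame_ms(half, tris, shade_work)); // setup rate is pure surplus
    return 0;
}
```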
 
Most of what's out there with DX11 tessellation right now is already unplayable on anything less than a 5850. So why would developers bother with these $100 to $200 DX11 products of yours at all?
Unplayable at max settings at 2560x1600 is not the same as unplayable, period.
 
After following these proclamations for a while I've come to the solid conclusion that nobody knows anything when it comes to IHV costs. It's usually a bunch of huffing and puffing that never materializes in financial results. For example, where is all the money that AMD was supposed to make on those cheap dies last generation? Where are those fantastic margins?
These things rarely go in such a straight line. The leading chip makers don't start making losses overnight, nor do the lagging ones break out into leadership overnight.

In the first stage, the leader loses its historical margins, then it loses its market share, and only after that do the leader and the laggard begin to switch places.

NV certainly has suffered lower margins and lost market share to AMD in the last ~2 years. I think Q1 results will throw better light on this.
 
You can't benefit from high tessellation performance if you've scaled down the shaders to the point where the chip is completely bottlenecked somewhere else.

Well the argument is that you need some minimum level of geometry/tessellation performance and therefore AMD's lowest common denominator approach makes sense. But that argument gets washed away the minute developers provide in-game sliders for geometry LOD, just like they do for AA, AF etc.
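As a minimal sketch of what such a slider could look like on the CPU side (the detail-level names and clamp values are assumptions of mine, not anything from a shipping engine; the only hard number is D3D11's tessellation factor cap of 64):

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical mapping from a user-facing "geometry detail" slider to the
// maximum tessellation factor an engine would clamp its hull shader output to.
// D3D11 caps hardware tessellation factors at 64; the rest is invented.
double maxTessFactorForSlider(int slider)  // slider in 0..4
{
    static const double levels[] = { 1.0, 4.0, 8.0, 16.0, 64.0 };
    return levels[std::clamp(slider, 0, 4)];
}

int main() {
    for (int s = 0; s <= 4; ++s)
        std::printf("detail %d -> max tess factor %.0f\n", s, maxTessFactorForSlider(s));
    return 0;
}
```

The idea being that the engine feeds the chosen cap into its hull shader constants, so lower-end DX11 parts simply sit at a lower detail setting.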

These things rarely go in such a straight line. The leading chip makers don't start making losses overnight, nor do the lagging ones break out into leadership overnight.

For market share, sure. But turnover is sufficiently high in the GPU industry that any reduction in costs would show up in a quarter or two. AMD has had tiny chips since the ~3870 days, right?
 
What's stopping them from implementing tessellation detail levels?
Most of what's out there with DX11 tessellation right now is already unplayable on anything less than a 5850. So why would developers bother with these $100 to $200 DX11 products of yours at all?
Plus, having half the geometry engines of GF100 is still twice what Cypress has.

We will first need to know what fps drop the GTX 4xx will have in real games with tessellation.
But if I had to choose between tessellation that's barely visible (at least in today's games) or double the fps, I'd take the second one :rolleyes:
 
There's a lot of wiggle room between "good place" and "disaster". I don't believe it's going to be only 10%, just saying that these predictions of doom are unfounded. IIRC the 285 is only 10% faster than the 4890 and didn't earn a disaster badge.

I think it's more down to the fact that they are late. If the GTX 480 were 10% faster and had come out in November, it would not have been a disaster.

However, it's going to be slightly over 6 months until you can buy one of these (Sept 26th to April 6th). Even at 250W it's going to use much more power than the 188W rating of the 5870. It seems like it will cost $100 more, and more troubling is that an ATI refresh could see the light of day any time now, factoring in that it's been over 6 months.

I would peg a Cypress refresh for June, which would give the GTX 480 two months of a performance lead while still losing to the 5970.

The window for the GF100 products seems to be very small.
 
After following these proclamations for a while I've come to the solid conclusion that nobody knows anything when it comes to IHV costs. It's usually a bunch of huffing and puffing that never materializes in financial results. For example, where is all the money that AMD was supposed to make on those cheap dies last generation? Where are those fantastic margins?

This doesn't account for sunk costs (R&D) at all, nor for variable costs such as manufacturing. AMD's GPG had good margins during the RV770 timeframe but failed to realize a profit, not through any lack of margins, but rather through lacking the volume necessary to overcome those costs.

Besides, Nvidia can safely sell a card that is 10% faster for 20% more money. They've got the name, reputation, marketing and "single-fastest GPU" halo to hold on to.

I completely agree.
 
Sony first won the Blu-ray battle, then lost the console war, and IMHO Blu-ray is stuck at a stalemate with digital distribution right now. All in all, not a very inspiring analogy.

I think the term "lose" is a bit harsh here. The PS3 is still selling, and doing much better in terms of both volume and margins thanks to the Slim.
 
For market share, sure. But turnover is sufficiently high in the GPU industry that any reduction in costs would show up in a quarter or two. AMD has had tiny chips since the ~3870 days, right?
Yes, but the 3870 wasn't competitive enough with G92. Things changed from RV770 onwards. Those points are more applicable after that point. GF104 vs 5830/5770 will be a better decision point this gen.
 
Plus, having half the geometry engines of GF100 is still twice what Cypress has.

Yes, but the setup/raster bottleneck relaxes faster than the shading power as geometry load reduces (other things being equal).

I think that's the point Dave is trying to make. Am I right?
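A crude numeric illustration of that relaxation (all numbers made up): hold the shading work per frame roughly constant and sweep the triangle count down, and the setup time falls away linearly while the shading time doesn't move, so the bottleneck flips from setup to shaders quite quickly.

```cpp
#include <cstdio>

int main() {
    // One hypothetical GPU with made-up rates for setup vs shading (work per ms).
    const double tri_rate = 1000.0, shade_rate = 1000.0;
    const double shade_work = 4000.0;  // per-frame shading load, held roughly constant

    // Sweep the geometry load downwards: setup time falls linearly with it,
    // shading time stays put, so the setup bottleneck relaxes first.
    for (double tris = 8000.0; tris >= 1000.0; tris /= 2.0) {
        const double setup_ms = tris / tri_rate;
        const double shade_ms = shade_work / shade_rate;
        std::printf("tris %5.0f: setup %4.1f ms, shading %4.1f ms -> %s-bound\n",
                    tris, setup_ms, shade_ms, setup_ms > shade_ms ? "setup" : "shader");
    }
    return 0;
}
```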
 
Yes, but the 3870 wasn't competitive enough with G92. Things changed from RV770 onwards. Those points are more applicable after that point. GF104 vs 5830/5770 will be a better decision point this gen.

If it is GF104 vs 5830/5770, that is. It could easily be GF104 vs 5830/5850.
 