NVIDIA GF100 & Friends speculation

Bah this doesn't quote the original quote you were referring to...

But I think the whole thing about AMD not being worried isn't with regard to performance, but more the fact that even with Fermi out and probably winning benchmarks, AMD still expects to sell every single 5870/5850 they make at their current price.

In other words, AMD isn't expecting Fermi to impact sales of 58xx in any way, shape, or form if availability rumors are true, until possibly Fermi B1.

What I'm currently expecting: no price adjustment by AMD in response to Fermi initially. Assuming Fermi availability is as low as rumored for ~2 months after launch, 58xx should continue to sell at current prices and current volumes. If Fermi availability is significantly higher than rumored, AMD "might" adjust the price.

Regards,
SB
 
Has anyone read this?

GeForce GTX 400 series available in higher volume than Radeon HD 5800?

...there should be more than 50,000 Fermi boards [this may not be the final figure, as we could not contact all AIC vendors and OEMs] available in the first 10 days of sales....
According to our sources close to heart of the company, nVidia wants to overtake the number of Cypress GPUs [both Hemlock and Cypress, i.e. 5800 and 5900 Series] by the end of this summer.

Apparently there were only 20,000 Cypress GPUs at launch.

Does this even make sense?
We also learned that nVidia shifted allocation of its 40nm wafers from lowly DirectX 10.1 parts to big-die Fermis - those numerous 40nm DX10.1 GPUs weren't only nVidia's plan to capture 80% of Arrandale designs, but also a way to keep the highest possible allocation of the 40nm process node for itself, appropriately shifting that allocation from cheap DX10.1 GPUs to high-end Fermi dies. In any case, a clever strategy.
 
How is drawing 130 Watt more than the 5870 possible with a 250 Watt TDP vs. a 188 Watt TDP? Total nonsense...

TDP is the thermal design point for a chosen temperature (safety margins can differ for each GPU and manufacturer), not actual power draw.
Just as the TDP can be overestimated, it could perhaps also be underestimated :rolleyes:.
 
I think I'll wait for the 32nm version of Fermi, à la G92b.
 
Seems to me he is regurgitating neliz's info incorrectly. neliz AFAIK said massive volume for B1 in May/June and Nvidia hogging 80% of 40nm wafers at TSMC. Given how pathetic Theo has been thus far, I'd put less faith in it than even in Fudo's.

Well, ignoring that, the notion that they're using GT21x@40G allocation for GF100 wafers isn't irrational at all. I don't know, of course, if that's the case, but it makes sense considering how supply-constrained 40G at TSMC still is. If memory serves well, Fab 12 has a current maximum capacity of 80,000 wafers/quarter, and TSMC expects that number to double in Q3 when Fab 14 starts operating too.

It's hard these days not to detect at least one error in the relevant speculative newsblurbs. Example here: http://www.digitimes.com/news/a20100322PD205.html NVIDIA didn't cut back the 480 to 15 SMs IMO due to yield problems, but rather due to some awkward design problem.

If I now combine the BSN rumor of roughly 50,000 SKUs in the first two weeks with the Digitimes rumor of a tad below 50% yields, it works out to roughly (±)1,000 wafers, and you're back at square one with self-conflicting data.
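
A quick back-of-envelope shows how those two rumors land at about a thousand wafers. This is only a sketch: the ~530 mm² die size and the standard gross-die-per-wafer approximation are my assumptions, not anything from the articles.

```python
from math import pi, sqrt

die_area = 530.0     # mm^2, rumored GF100 die size (assumption)
wafer_d = 300.0      # mm, TSMC 40G wafer diameter
yield_rate = 0.50    # "a tad below 50%" per the Digitimes rumor
boards = 50_000      # BSN's launch-volume rumor

# Standard gross-die-per-wafer approximation: area term minus edge loss.
gross = pi * (wafer_d / 2) ** 2 / die_area - pi * wafer_d / sqrt(2 * die_area)
good = gross * yield_rate
wafers = boards / good

print(f"~{gross:.0f} candidates/wafer, ~{good:.0f} good dies/wafer")
print(f"~{wafers:.0f} wafers for {boards} boards")  # roughly 1,000 wafers
```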
 
TDP is the thermal design point for a chosen temperature (safety margins can differ for each GPU and manufacturer), not actual power draw.
Just as the TDP can be overestimated, it could perhaps also be underestimated :rolleyes:.
While this is in principle possible, IHVs must be very careful not to, or else they run the risk of producing unstable products. Therefore I think the expectation is that TDP is, as a rule, overestimated.
 
While this is in principle possible, IHVs must be very careful not to, or else they run the risk of producing unstable products. Therefore I think the expectation is that TDP is, as a rule, overestimated.

It would be a disaster if the TDP of any product didn't include a safety margin. If an IHV set the TDP for product X too tight, I wouldn't want to know how many cards would get fried by even conservative overclocking exercises.
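
To see why even a mild overclock eats into a tight TDP margin quickly: dynamic power scales roughly as V²·f. A minimal sketch with purely illustrative numbers (not any real card's specs):

```python
# Illustrative only: classic CMOS dynamic-power relation, P ~ V^2 * f.
base_power = 188.0   # W, a board assumed to run right at its rated TDP
f_scale = 1.10       # +10% core clock
v_scale = 1.05       # +5% voltage bump to keep the overclock stable

oc_power = base_power * v_scale**2 * f_scale
print(f"~{oc_power:.0f} W")  # ~228 W: ~21% more power from a "conservative" OC
```

With no safety margin in the rated TDP, that extra ~20% has nowhere to go but past what the cooling and power delivery were designed for.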

Isn't the 32nm process for graphics cards canceled?

28nm makes way more sense for TSMC.
 
Again you're selecting titles for inclusion in the analysis based on whether they "scale" or not. Since the purpose of the analysis is whether or not an architecture scales you're just sweetening the pot and your results aren't meaningful. That's the equivalent of looking at the past performance of fund managers and picking the best performers and then claiming you're great at picking good fund managers.
So, you think using CPU-limited games in your analysis is a good idea?

In any case what's the definition of a game that doesn't scale? Resolution isn't the only determinant of workload or performance so that can't be the only measuring stick. I would argue that if an architecture doesn't address bottlenecks that in itself is an issue with the architecture, not the application.
How can an architecture address a CPU bottleneck?

Amdahl's law is always lurking - some games fall victim to it pretty early, some just keep on going. For some bizarre reason you want to include games regardless.

You want to believe that when a game is only 35% faster on HD5870 than on HD4890, it's an architectural fault. You don't provide any evidence it's an architectural (or configuration of architecture) fault, but you (and others) repeatedly state that the architecture is failing.

I suspect the architecture is failing, but my focus is on the way it handles highly tessellated workloads. I'm doubtful tessellation performance will scale with future GPUs without some change, but I can't work out what the cause of the bottleneck currently is.

Jawed
 
I suspect the architecture is failing, but my focus is on the way it handles highly tessellated workloads. I'm doubtful tessellation performance will scale with future GPUs without some change, but I can't work out what the cause of the bottleneck currently is.

Why should it not scale up?
 

[image: power-load.gif]

GTX 285 has a 204W TDP. System load power is 330W.
GTX 480, taking 250W, has 46W more. 375W or so?

I'll allow ±10W for measurement error: 385W.

Techreport's HD5870 system is 290W.

If nApoleon is using a lower-volted HD5870, or a test that doesn't stress Cypress that much, system load could end up more like the 5850's... 255W.

130W could be a bit of a stretch, but 90-100W isn't, considering how thrifty the RV8xx architecture has been in real-life workloads compared to its board power/TDP.
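
Spelling out that chain of estimates (system-load figures from the post, not card-only draw; the GTX 480 number is an extrapolation, not a measurement):

```python
gtx285_load = 330                  # W, measured system load with a GTX 285
tdp_gtx285, tdp_gtx480 = 204, 250  # W, rated TDPs

# Naive extrapolation: add the TDP delta on top of the measured system load.
gtx480_load = gtx285_load + (tdp_gtx480 - tdp_gtx285)  # ~376 W

for hd_load in (290, 255):  # Techreport's HD5870 vs a 5850-like setup
    print(f"delta vs {hd_load} W system: {gtx480_load - hd_load} W")
# -> 86 W and 121 W: 90-100 W is plausible, 130 W only if Cypress lands low
```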
 
How is drawing 130 Watt more than the 5870 possible with a 250 Watt TDP vs. a 188 Watt TDP? Total nonsense...
I forgot to consider PSU efficiency. 130W is the total system power consumption difference; if PSU efficiency is 80%, the card power consumption difference should be about 100W.
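
The conversion being described, as a one-liner (assuming the PSU losses scale linearly with load):

```python
wall_delta = 130.0     # W, at-the-wall system power difference
psu_efficiency = 0.80  # assumed in the post

card_delta = wall_delta * psu_efficiency  # DC-side difference at the cards
print(f"~{card_delta:.0f} W")             # ~104 W, i.e. "about 100 W"
```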
 
So, you think using CPU-limited games in your analysis is a good idea? How can an architecture address a CPU bottleneck?

Of course not. And it can't.

Amdahl's law is always lurking - some games fall victim to it pretty early, some just keep on going. For some bizarre reason you want to include games regardless.

Yep, because Amdahl's law is just as relevant for things happening solely on the GPU.

You want to believe that when a game is only 35% faster on HD5870 than on HD4890, it's an architectural fault. You don't provide any evidence it's an architectural (or configuration of architecture) fault, but you (and others) repeatedly state that the architecture is failing.

If we were to simplify the rendering of a frame into two stages - sequential (CPU) and parallel (GPU) - the only way you achieve just 35% higher performance after doubling GPU resources is to start out with your CPU workload representing nearly half of rendering time. Are you willing to assign the CPU nearly half?
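
For reference, the arithmetic under that two-stage model: with a sequential fraction s of the original frame time and the parallel part halved by the doubled GPU,

```latex
\frac{1}{s + \frac{1-s}{2}} = 1.35
\quad\Rightarrow\quad s + \frac{1-s}{2} = \frac{1}{1.35} \approx 0.74
\quad\Rightarrow\quad s \approx 0.48
```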

Nobody is saying the architecture failed. Just that there is some process on the GPU that isn't much faster than it was on prior architectures. This in effect represents the unchanged sequential part of the workload when comparing the two cards. So whose fault is it that that particular process isn't any faster between hardware generations? The IHV's or the ISV's?
 
Right.

There's an explanation for all the wild power usage / SP numbers.

It seems the 512CC will be reserved for the B1 part, which is currently slated for Q3.

295W is the power consumption of the part with 512CC and a 725+MHz core (not going to see these for a while; I guess not everyone liked them).
275W is the power consumption of the A3 part with 480CC, a 725+MHz core and 1050 mem, i.e. the OC GTX 480 models.
250W is the power consumption of the A3 part with 480CC, a 700MHz core and ~950 mem, i.e. the GTX 480.

So, well... everyone was right.

I also heard that power consumption is still an issue, especially idle power usage. Tomorrow should bring a new driver release as NV engineers are trying to get it down as much as possible.


On GF104: if I wanted to feed my "GF100 disabled TMU rumor", I'd say that GF104 also has 64 TMUs. Power is again an issue.
 