AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
Given the lack of actual data, I'd like to hijack this comment for a quick education about better discard: what is it, and what opportunities are allegedly still open to be exploited?

I'm assuming the better discard is geometry based? What is possible other than backface removal?

Or are they talking about better pixel discard? And, if so, is there much left to improve there?

Wouldn't discarding geometry also discard its pixels? Regarding improvements in discard, you could do tile-based hidden-surface removal similar to TBDR architectures, effectively getting zero overdraw.
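To ground the question, here's a minimal illustrative sketch (Python, not anything from AMD's hardware) of the screen-space backface test, the most basic geometry discard mentioned above. It assumes a y-up screen space with counter-clockwise front faces:

```python
def is_backfacing(v0, v1, v2):
    """Screen-space backface test: with counter-clockwise front faces,
    a clockwise winding (negative doubled signed area) faces away."""
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx < 0  # twice the triangle's signed area

# counter-clockwise winding -> front-facing, kept
print(is_backfacing((0, 0), (1, 0), (0, 1)))   # False
# reversed winding -> back-facing, discarded
print(is_backfacing((0, 0), (0, 1), (1, 0)))   # True
```

Backface removal only catches triangles facing away; tile-based hidden-surface removal goes further by also rejecting front-facing triangles that end up occluded within a tile.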
 
[Image: PCGH-RX-480.png]


http://extreme.pcgameshardware.de/n...testlabor-test-am-29-juni-57.html#post8297454
 
Could be fake, not sure ...
[Image: indexjwu1f.png]
 
NVIDIA always says a lot of things... it's called marketing.
It's not about AFR. It's about frame buffer transfers and resource coupling. Limited bandwidth impacts both, and both are involved in every multi-GPU technique. Yes, there are some tricks, like MSAA upscaling, but those tricks are not a real solution.
Bandwidth is one of the biggest issues in today's computing.
You say bandwidth is the biggest issue in today's computing, and that it's not about AFR but about the other stuff.
An SLI bridge offloads the transfer of a complete image from the PCIe bus, freeing up to 2 GB/s for that other stuff you care about.

2 GB/s is more than enough to transfer a 4K image at 72 Hz, allowing for 144 Hz aggregate, so that's plenty for most high-end solutions today. It's also plenty to transmit one eye of 2k×2k VR images at 90 Hz.
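The arithmetic behind those figures can be sketched quickly. Assuming 24-bit (3 bytes per pixel) color and decimal gigabytes, both workloads fit under 2 GB/s; note that at 32-bit color, 4K at 72 Hz would be roughly 2.39 GB/s and would no longer fit:

```python
def frame_bandwidth_gbps(width, height, bytes_per_pixel, hz):
    """Raw bandwidth (decimal GB/s) needed to push whole frames at a given rate."""
    return width * height * bytes_per_pixel * hz / 1e9

# 4K at 72 Hz, 24-bit color
print(round(frame_bandwidth_gbps(3840, 2160, 3, 72), 2))   # 1.79
# one 2k x 2k VR eye at 90 Hz, 24-bit color
print(round(frame_bandwidth_gbps(2048, 2048, 3, 90), 2))   # 1.13
```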

So exactly what are you griping about? The fact that it costs a bit more? Is that it?
 
Excellent meta-analysis by videocardz.com who accumulated the statistics from 330 different submitted early-bird RX 480 Firestrike benchmarks. The most interesting data was the frequency of each GPU, giving us a good estimate of expected and maximum clocks. Quoting from the article, "The highest observed clock was 1379 MHz (most samples oc to 1330-1350 MHz), with median of 1266 MHz (stock boost clock). The memory can be overclocked up to 2200 MHz, although most samples are clocked up to 2100 MHz."

There is also discussion of the benchmark values themselves, but IMHO no single benchmark is a good way to draw performance comparison conclusions. We'll have the official reviews in just a few days which will likely present a dozen different games and benchmarks.
 
One thing I've always been curious about: does the increase in units sold actually compensate for the money a company loses when it takes a dollar off a product's selling price?

Sent from my HTC One via Tapatalk
 
Well, it's volume versus margins; companies will estimate the best margins they can get based on the most volume they can move at a given price. If done right, yeah, they should end up with more money overall.

You might take a hit on volume to get higher margins if you think the higher margins can compensate for the loss of volume, and vice versa.
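That trade-off has a simple break-even form: if each unit carries margin m and the price drops by c, gross profit stays flat only when volume grows by a factor of m/(m − c). A quick sketch; the $40 margin and $5 cut are made-up numbers, not anything from AMD:

```python
def breakeven_volume_multiplier(unit_margin, price_cut):
    """How much unit volume must grow to keep gross profit flat after a price cut."""
    new_margin = unit_margin - price_cut
    if new_margin <= 0:
        raise ValueError("the price cut wipes out the margin entirely")
    return unit_margin / new_margin

# hypothetical: $40 margin per card, $5 price cut -> ~14% more units needed
print(round(breakeven_volume_multiplier(40, 5), 3))   # 1.143
```

The thinner the starting margin, the more brutal this gets: cutting $5 from a $10 margin requires doubling volume.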
 
I don't think Polaris is profitable at this point and when you add in development costs, I don't think it ever will be.

The last time we had cards this cheap on a new node with a comparable die size was the 3870 and 3850, and those were $220 and $180 cards. Those cards were 20 percent smaller and on a vastly cheaper node. Add in over 8 years of inflation and it doesn't make sense that the RX 480 and the newly demoted RX 470 (the cut-down Polaris) are $200 and $150 respectively. And it all has to do with costs. Let's look at wafer cost first.

[Image: wafer cost by process node, 0911CderFig2_Wafer_Cost.jpg]


Of course these are first-run wafer costs, but the point still stands. If there was ever a time for AMD to justifiably raise graphics card prices, it was this time around. But for some reason, it's never been cheaper. Add in development costs as seen below, and Kyle Bennett might basically be right: AMD's entire graphics product stack might have collapsed, not because they are bad cards, but because they are unprofitable.

[Image: development costs by node, image001.jpg]
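To put rough numbers on the wafer-cost argument above, here is the classic dies-per-wafer approximation applied to a Polaris 10-sized die (~232 mm²) on a 300 mm wafer. The wafer price and yield below are hypothetical placeholders for illustration, not figures from the charts:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Classic gross-dies approximation: wafer area over die area,
    minus an edge-loss correction term. Defect yield not included."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

dies = dies_per_wafer(300, 232)          # Polaris 10-sized die
wafer_cost, yield_rate = 9000.0, 0.8     # hypothetical wafer price and yield
print(dies, round(wafer_cost / (dies * yield_rate), 2))
```

Even at a generous yield, a pricier FinFET wafer pushes the per-die cost up fast, which is the core of the margin worry at a $200-$150 retail price.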


So why did AMD price their cards so low, then? The answer is competition.

The 1070, for marketing purposes, with its supposed $379 price, is a price killer for AMD. If the 1070 is 35-40% faster than an RX 480, then the $200 and $229 price points weren't consciously chosen by AMD; they were forced upon them. This is because Nvidia is simply the stronger brand and has the greater market share, which means that at similar price points and similar price-to-performance, Nvidia will take market share away from AMD. If AMD had priced the RX 480 at $300, it would have been a repeat of Tonga vs. the GTX 970/980 as far as market-share bleed.

I think, from the slides shown to us in January, AMD wanted to price this chip in the $350 range, because AMD initially indicated this to be the sweet-spot range. That was also the price of Pitcairn, which again was made on a cheaper node and was smaller.

Hopefully for AMD's sake, GP106 arrives late and Nvidia's pricing isn't aggressive.
 
The 1070, for marketing purposes, with its supposed $379 price, is a price killer for AMD.

Yeah, if you want to take MSRP as is. The problem is that with what Nvidia has done with the Founders Edition this time around, no one is pricing their cards at MSRP. You can't find a decent 1070 sub $399. For all their scummy moves, this must be their crowning achievement (the FE pricing).
 
The thing is, although $379 is 100 percent marketing and not retail, and doesn't exist in reality right now, AMD needs to take this price into account because Nvidia could easily make it a reality. This dual pricing was a greedy move, but it is effective for marketing purposes.
 