AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
I had posted a much stronger rant about the upthread armchair analysis on the perf/W, which I swiftly removed to rethink how to make the point. It boils down to this: GPUs are the most difficult semiconductor device type to create these days, because of their size, complexity, feature set and everything else. So there are going to be differences baked into the designs of GPUs from two different vendors, almost across the board, from the microarchitecture all the way through to the physical aspects, never mind the software.

So I don't know what miracles everyone expects. Heck, I don't even know why I'm trying to argue it out. It's just disappointing, knowing how difficult it is to design a functioning GPU that can be shipped into the market, to see armchair analysis take big swings at the hard work of a company that has limited resources and can't afford to take giant risks in the microarchitecture.

We've been discussing GPUs here at Beyond3D for over a decade. We mostly know how it goes. The hype from the vendor ahead of time doesn't really factor into things. That's marketing. The leaks from the myriad industry-ruining piece of shit news websites don't really factor in. They're toxic.

Given the foundry tech available and how their design works, and their inability to just throw it all out, I was expecting the transistor tech to take them most of the way across the line on this one. That's what we got with Pascal. And I think that's what we have here, just with what looks like GF14 doing quite a bit worse for them. That's sad, but that's possibly not their fault given capacity and their plans for volume.

The end result is something with a nice set of features and good perf/$, at competitive perf/W with the existing chips in the segment, that's worth buying.

It seems that the lack of R&D funding that AMD could put into this design (along with having to use GF) directly resulted in what we now see.

Nvidia's R&D funding is much much more than AMD's and it shows.
 
Is the excess power draw from the PCIe slot dangerous in any way?
Also, does any review tell whether or not the 480 has conservative rasterization?
edit - and other D3D12 features?
You need to know what current your PSU supports on the PCIe rail, and what the wire gauge can handle, for both peak and sustained draw.
If either is under spec, they will start to overheat.
Considering this was being targeted and promoted to budget-conscious consumers, it is something that might be a headache, but only if OCing.
The Molex PCIe connector is rated at 8A, but that is the only part universally guaranteed at this higher demand.
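To make the arithmetic concrete, here's a minimal sketch of the spec check being described. The 5.5 A figure is the PCIe CEM limit for the slot's 12 V rail (66 W of the 75 W slot budget); the 8 A figure is the connector rating quoted above. The "measured" readings are placeholders for illustration, not numbers from any review:

```python
# Sanity check: does a measured 12 V current stay within its rated limit?
SLOT_12V_AMP_LIMIT = 5.5   # PCIe CEM spec, slot 12 V rail (66 W)
SIX_PIN_AMP_LIMIT = 8.0    # per the Molex connector rating quoted above

def within_spec(measured_amps: float, limit_amps: float) -> bool:
    """True if sustained current draw is at or under the rated limit."""
    return measured_amps <= limit_amps

# Placeholder readings for a hypothetical overclocked card:
slot_amps, six_pin_amps = 6.8, 7.2
print(within_spec(slot_amps, SLOT_12V_AMP_LIMIT))    # slot is over spec
print(within_spec(six_pin_amps, SIX_PIN_AMP_LIMIT))  # 6-pin is still fine
```

The asymmetry is the point: the 6-pin has headroom beyond its nominal 75 W rating, while the slot's 12 V rail does not, which is why excess draw through the mainboard is the part worth worrying about.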

Maybe this is why AMD is leaving it to board partners to offer OC cards; their problem, not AMD's :)
Although there is still the pitfall of anyone going 2x480 and drawing over spec through the mainboard slot.
Anyway, the caveat is that these are the results for one card; ideally you'd want to see more 480s tested, to see whether it varies with the silicon lottery or whether that review card was a poor example.
Cheers
 
So I don't know what miracles everyone expects. Heck, I don't even know why I'm trying to argue it out. It's just disappointing, knowing how difficult it is to design a functioning GPU that can be shipped into the market, to see armchair analysis take big swings at the hard work of a company that has limited resources and can't afford to take giant risks in the microarchitecture.
Historically, we've seen the pendulum of power and performance swing between Nvidia and AMD. With 28nm, Nvidia took a huge step forward from Fermi to match AMD and then leapfrogged them with Maxwell. After the introduction of GCN, AMD has essentially come to a standstill: their only exciting piece of silicon in 4 years was HBM (which they half-way screwed up).

Yes, AMD has limited resources, and when they didn't make any major jumps in power efficiency, we gave them a pass, saying that they didn't have enough design resources to do so in 28nm, so they postponed it to 16nm. This belief was seemingly validated by their claims 6 months ago.

And then they release a chip with a perf/W efficiency that just matches Maxwell in 28nm? That's not a fab issue. I can imagine that it's tough for an AMD engineer to see the reactions to something on which they have probably worked incredibly hard, but do you expect everybody to give them a well-meaning pat on the head and say, "it's really not too bad"? (At least, those engineers should have made a pretty decent penny on their stock, going from $1.62 to $5.20.)

Given the foundry tech available and how their design works, and their inability to just throw it all out, I was expecting the transistor tech to take them most of the way across the line on this one.
What does "across the line on this one" mean? AMD announced power efficiency improvements of 2.5x for Polaris, a number that would have made them even with Pascal. Instead, it matches Maxwell. That's just pitiful. Polaris doesn't seem to have any obvious architectural changes that target power efficiency, not the way Maxwell's SM changes were obviously a positive for power efficiency.

That's what we got with Pascal. And I think that's what we have here, just with what looks like GF14 doing quite a bit worse for them. That's sad, but that's possibly not their fault given capacity and their plans for volume.
Pascal could afford to be just a shrink of Maxwell. Polaris could not. You're asking for leniency for the AMD engineers and shifting the blame to GF. What about the feelings of the GF engineers? It's not as if GF is a company that's swimming in money...

The end result is something with a nice set of features and good perf/$, at competitive perf/W with the existing chips in the segment, that's worth buying.
For a company, good perf/$ is a bug, not a feature. Competitive perf/W for how long? A month?
 
I am really happy about the architectural changes AMD did. It seems that console devs have given them lots of feedback about the choke points.

I personally especially like improved geometry processing, improved MSAA performance and improved DCC. GCN4 suits my MSAA-trick technique (see my SIGGRAPH presentation) so much better than GCN2. Going to be interesting if people can reach 4K with techniques like this. I also like degenerate triangle early culling (hoping this also means that strip cut index is "free"). Native int16/fp16 support is also great in reducing the register pressure. Most of my integer math could be ported directly to int16.
 
The hype from the vendor ahead of time doesn't really factor in to things.

It's not reasonable to ask folks to ignore AMD's promises when the reality is laid bare for all to see. This isn't happening in a vacuum.

Also it may be Glofo's fault but the fact is compared to Pascal, Polaris looks quite mediocre. Add to that the completely misleading hype from AMD and they deserve all the criticism that they get.
 
Please.... AMD did this to themselves with all the absurd efficiency claims leading up to the launch. It is marginally better than, or on par with, chips produced on 28nm; the promised leap it is not. AMD had to know the numbers. The solution is simple: don't hype up your weakest point.
AMD said 150W typical board power at Computex. What hype do you believe you saw from them, honestly? The missing piece was performance, which we didn't know about in full until today. And it's more than just on par with their 28nm. Yes it's not as good as Pascal, but that's not what was promised.
 
Also it may be Glofo's fault but the fact is compared to Pascal, Polaris looks quite mediocre. Add to that the completely misleading hype from AMD and they deserve all the criticism that they get.
And compared to their own prior products it looks really good. Yes it would be nice if they'd get closer to Pascal, but this isn't the mid 2000s. Today's AMD can't perform like that. The hype was FinFET (which played out) and 150W (which played out). Everything else is in your head, probably driven by the shitty websites where we get most of our GPU "news" from these days.
 
Please..... You know damn well you saw the slide (and the speech) too. I'm not going to engage in conversation with anyone being purposefully obtuse.
I'm not being obtuse. Show me the slide that's driven your disappointment so we can figure out where AMD went wrong. They'll read this thread, maybe you'll help them.
 
And compared to their own prior products it looks really good. Yes it would be nice if they'd get closer to Pascal, but this isn't the mid 2000s. Today's AMD can't perform like that. The hype was FinFET (which played out) and 150W (which played out). Everything else is in your head, probably driven by the shitty websites where we get most of our GPU "news" from these days.

AMD also claimed a 2.8× performance/watt improvement. In practice, it's barely 2× from Tonga, let alone Fiji.
https://www.techpowerup.com/reviews/AMD/RX_480/25.html

In fact, based on TPU's perf/W data at 1080p, if you took the R9 290X's efficiency and multiplied it by 2.8, you'd end up somewhere between the GTX 1070 and 1080.

AMD promised way too much, didn't deliver, and now people are bitterly disappointed. It's hard to blame them. Hell, Pitcairn reaches 74% of Polaris 10's efficiency.
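The gap between claim and delivery is simple arithmetic: take a baseline perf/W, multiply by the promised factor, and compare with what was measured. The values below are normalized placeholders to illustrate the shape of the calculation, not TPU's actual figures:

```python
# Illustrative perf/W arithmetic for the claim-vs-reality gap.
# All values are normalized placeholders (baseline = 1.0), not review data.

baseline = 1.0           # hypothetical normalized perf/W for the old part
claimed_factor = 2.8     # AMD's promised generational improvement
measured_factor = 2.0    # roughly the improvement reviews observed

promised = baseline * claimed_factor
delivered = baseline * measured_factor

# Fraction of the promised efficiency that failed to materialize:
shortfall = (promised - delivered) / promised
print(f"Delivered {delivered:.1f}x vs promised {promised:.1f}x "
      f"({shortfall:.0%} short of the claim)")
```

With these placeholder numbers the shortfall comes out near 29%, which is why "barely 2x instead of 2.8x" reads as such a large miss.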
 
AMD said 150W typical board power at Computex. What hype do you believe you saw from them, honestly? The missing piece was performance, which we didn't know about in full until today. And it's more than just on par with their 28nm. Yes it's not as good as Pascal, but that's not what was promised.
Where is the 2.5x perf/W?
"And, BTW, there's still a lot of optimization going on!"
 
AMD said 150W typical board power at Computex. What hype do you believe you saw from them, honestly? The missing piece was performance, which we didn't know about in full until today. And it's more than just on par with their 28nm. Yes it's not as good as Pascal, but that's not what was promised.

I saw 150W board power, minus the "typical", and a more physical constraint on AMD's usual marketing weaseling with the 6-pin.
Perhaps it is tradition at this point, but I am curious why Furmark is allowed to breach safe power limits. Of the vendors, AMD should have been the one best positioned to handle this automatically, and for far longer.
I'll have to review the circumstances where the card may be violating specs to see if there's more to it.
Some of the other measurements, like the poorer idle power consumption and minor overclocks really ramping power consumption, make me wonder if there is something else at play.

Much of AMD's presentation on physical implementation is nice, and was nice when a lot of it was used for Fury and Carrizo. I can give AMD some polite applause for managing to port that to 14nm, where it might provide something salvageable in the mobile form factor. Other than that, I'm not sure Polaris isn't a regression in some ways, with GCN looking to be pushed just as far past its comfort zone as it was on 28nm, or further.
 
Show me the slide that's driven your disappointment so we can figure out where AMD went wrong.
The slide hasn't driven my disappointment; the power consumption of the card has.

Just study this:
https://www.techpowerup.com/reviews/AMD/RX_480/25.html

Certainly, they made some small progress, but the card is on par with the 980 and the Fury (28nm chips). Not only that, but look at the relative increases compared to the 1080/1070... With basic addition/subtraction, one can see that the relative margins for the new cards are actually considerably larger for Nvidia (they have made more progress). But with a slightly higher understanding of mathematics, one can understand that this is even more impressive than the raw numbers would suggest. The upshot being that, while AMD has indeed made progress, they have in fact fallen considerably further behind Nvidia.
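The ratio-vs-raw-difference point can be made with a toy example. All the perf/W numbers below are hypothetical, normalized so the old Nvidia part is 1.0; they only exist to show how the gap can widen even while AMD improves:

```python
# Toy normalized perf/W figures (all hypothetical) to show why ratios,
# not raw deltas, are the right way to compare generational progress.
perf_w = {
    "old_nvidia": 1.00,   # e.g. a 980-class 28nm part
    "old_amd":    0.60,   # e.g. a 390-class 28nm part
    "new_nvidia": 1.70,   # e.g. a Pascal-class part
    "new_amd":    1.00,   # new AMD part roughly ties old Nvidia
}

amd_gain    = perf_w["new_amd"] / perf_w["old_amd"]        # ~1.67x gen-on-gen
nvidia_gain = perf_w["new_nvidia"] / perf_w["old_nvidia"]  # 1.70x gen-on-gen

gap_before = perf_w["old_nvidia"] / perf_w["old_amd"]      # Nvidia's lead, last gen
gap_after  = perf_w["new_nvidia"] / perf_w["new_amd"]      # Nvidia's lead, this gen
print(gap_before, gap_after)  # the lead widens even though AMD improved
```

The design choice here is to compare multiplicative gains: a smaller generational multiplier compounds into a larger absolute gap, which is the "fallen further behind" conclusion above.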
 
Rys is right, the majority of the hype did come from sites like Videocardz and WCCFTech and the people that inevitably proliferated those links all over the internet. However, AMD didn't help their cause by advertising a "historical leap in performance per watt for Radeon GPUs".

Btw, was the whole uprising / VR for everyone campaign a leak? Can't find any direct reference to it from AMD.
 
Honestly, I'm not impressed, because they've given almost no reason for current AMD owners to upgrade unless they have a game or application that benefits from Polaris. It's otherwise just a good card for people who have a low-end card and want to upgrade.

That's fine, but they basically left anyone who is mid-range and above waiting. Combine that with the 2.8x figure that wound up being flat-out untrue in practice across the entire GPU (it's probably true for individual transistors)....
 
But with a slightly higher understanding of mathematics, one can understand that this is even more impressive than the raw numbers would suggest. The upshot being that, while AMD has indeed made progress, they have in fact fallen considerably further behind Nvidia.
Yes, I agree that they're now further behind. They have been behind for years. The fallacy is expecting them to catch up, and then feigning disappointment when it inevitably doesn't happen. Everything we've discussed around here for 15-odd years should have led us to know better.

The question is whether RX 480 is a bad product unworthy of the asking price. That you can even buy it at that asking price is worth a small cheer, given the FE bullshit. I contend the answer is no, and that it's worth the money at the perf/W and perf/$, and that AMD might even make some money. That's the thing that'll help them stop slipping further behind. A dead AMD is not anything we should be interested in.
 