AMD: Speculation, Rumors, and Discussion (Archive)

Then again, Computerbase cut their consumption by 33W and increased performance by ~5'ish % by undervolting their sample
Look closer. They saved 1W vs. the default settings and gained about 5% performance.

https://www.computerbase.de/2016-06/radeon-rx-480-test/12/


EDIT: 5% free performance is still very good; the problem is it probably isn't always free. If you get a card that is stable at the reduced voltage, great; if not...
 
Then again, Computerbase cut their consumption by 33W and increased performance by ~5'ish % by undervolting their sample
Yeah, and this is an area for discussion (nicely picked up by another member earlier on), because the voltage and power targets are not aligned correctly: to get the true performance of the default voltages you need to raise the power target, which also causes power consumption to rise quite a lot.

The 480 should have shipped with a lower voltage profile to reflect the default power target, IMO.
They seem to have been caught between two different decisions about which performance-consumption window they were targeting by default.
That said, the fps and watts gained or lost were still small. From a purely performance perspective the ideal would be to raise the power target and leave the voltage at default, but the resulting power consumption and TDP would be unacceptable, IMO.
Cheers
 
Then again, Computerbase cut their consumption by 33W and increased performance by ~5'ish % by undervolting their sample

I really can't believe how bad AMD's marketing is... As I said, cards for reviews, and even more so for launch-day reviews, need to be the best of the best, yet AMD shipped cards to most of the major sites seemingly without testing them. It would even have been better if every single card shipped had a different voltage (the minimum possible for that chip) and they called it their voltage-optimization magic; people could actually have believed them. Instead they just shipped what look like random cards and hoped for the best?

From the PCIe x16 slot perspective, a big consideration is whether the manufacturer implemented the High Current Series specification for the 24-pin connector and its 2x12V contacts.
This really is only a consideration if what Tom's Hardware identified is universal and, critically, if one is looking to run mGPU with the 480 or to overclock - just reiterating, because this is a sensitive subject until it is fully clarified and validated.

Regarding the 8-pin PEG: on its own it is not enough, as AMD will still need to work on the power distribution of the 480.
As an example, the 1080 FE pulls a peak burst of 60W and an average of 40W through the mainboard connector, with most of the "pressure" put onto the PEG connector.
Cheers

It would be interesting to know how many reviewers used a budget motherboard and PSU when testing the 480, along with OCing.

Yes, all of this is about budget equipment because, well, it's a budget card. We wouldn't be having this conversation if this were a high-end $600 card, but it isn't. Reviewers' PCs are high-end, so they don't risk anything, but if the average Bob buys this card, puts it in some Biostar board, overclocks it, and then hears interference in his speakers, has the PC restart randomly, or just blows up the 24-pin on his board, taking both the board and the power supply with it, then it's AMD's responsibility.
 
Async shaders are still a mostly unused feature on PC. People are experimenting with it. Console developers have used async shaders for a long time already and have gained significant understanding of them. Async shader perf gains depend heavily on the developer's ability to pair the right shaders together (different bottlenecks) and on tuning the scheduling + CU/wave reservation parameters (to prevent starving and other issues). Manual tuning for each HW configuration is not going to be acceptable in PC development. The GPU should dynamically react to scheduling bottlenecks. GCN4 saves lots of developer work and likely results in slightly better performance than statically hand-tuned async shader code.
Reserving CUs would technically reduce the GPU's ability to schedule dynamically, but it would go some way toward addressing one of the base latency problems Sony found when evaluating HSA for audio purposes.
Low-demand periods for the high-priority workloads would lead to underutilization. This seems to coincide with the subsuming of the Tensilica block for TrueAudio, which I'm seeing some hints elsewhere as having relevance for that platform holder. It might matter for Scorpio as well, since the value-add of the more extensively customized hardware in Durango was seriously reduced.

Preemption for compute and graphics, and the hardware virtualization brought in by GCN3, might also help conform with partitioned systems and with handling time slices and long-running kernels.
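
Going back to the pairing point in the quote: as a reference, this is roughly what "pairing shaders with different bottlenecks" looks like from the API side in D3D12. A minimal sketch, assuming the device, fence and recorded command lists already exist; all names here are placeholders and resource/PSO setup is omitted:

```cpp
// Minimal D3D12-flavoured sketch: run an ALU-heavy compute job on a dedicated
// compute queue so it can overlap a bandwidth-heavy pass on the graphics queue.
#include <windows.h>
#include <d3d12.h>

ID3D12CommandQueue* CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // separate async compute queue
    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

void SubmitOverlappedWork(ID3D12CommandQueue* gfxQueue,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12CommandList*  bandwidthBoundPass,  // e.g. a post-process pass
                          ID3D12CommandList*  aluBoundPass,        // e.g. a lighting compute job
                          ID3D12Fence*        fence,
                          UINT64              fenceValue)
{
    // Feed both queues independently; the GPU can fill execution bubbles left
    // by one workload with waves from the other, which is where the win comes from.
    gfxQueue->ExecuteCommandLists(1, &bandwidthBoundPass);
    computeQueue->ExecuteCommandLists(1, &aluBoundPass);

    // The graphics queue waits for the compute result before consuming it.
    computeQueue->Signal(fence, fenceValue);
    gfxQueue->Wait(fence, fenceValue);
}
```

The API only expresses the two queues and the fence; how well the work actually overlaps (and whether manual CU reservation is needed) is exactly the per-hardware tuning problem the quote describes.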

Register pressure is another bottleneck of the GCN architecture. It's been discussed in many presentations since the current console gen launched. Fp16/int16 are great ways to reduce this bottleneck. GCN3 already introduced fp16/int16, but only for APUs. AMD marketing slides state that GCN4 adds fp16/int16 for discrete GPUs (http://images.anandtech.com/doci/10446/P3.png?_ga=1.18704828.484432542.1449038245).
I suppose we'll see mention of the difference in the GCN4 ISA document. GCN3's ISA even gives the ability to run with 8-bit granularity as well as 16-bit, but I'm not sure if there's something in the clamping behavior or encoding changes that makes the GCN4 version "new".
On the other hand, various other "new" features given for Polaris are not so new.
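
To make the register-pressure point a bit more concrete (this is just the general packing idea, not any particular ISA's encoding): two 16-bit values share one 32-bit register, so packed fp16/int16 math roughly halves the register footprint of that data, which in turn lets more waves stay resident. A trivial sketch of the packing itself:

```cpp
#include <cstdint>

// Two 16-bit values packed into a single 32-bit word, mirroring how packed
// fp16/int16 operations let a GPU keep two operands in one 32-bit register.
uint32_t pack_u16x2(uint16_t lo, uint16_t hi)
{
    return static_cast<uint32_t>(lo) | (static_cast<uint32_t>(hi) << 16);
}

uint16_t unpack_lo(uint32_t v) { return static_cast<uint16_t>(v & 0xFFFFu); }
uint16_t unpack_hi(uint32_t v) { return static_cast<uint16_t>(v >> 16); }
```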

Console devs have also lately started to discuss methods to sidestep GCN's weak geometry pipeline. My SIGGRAPH presentation and Graham's GDC presentation, for example, give some good pointers. Graham's GDC presentation is a must-read: http://www.frostbite.com/2016/03/optimizing-the-graphics-pipeline-with-compute/ . On GCN2 it is actually profitable to software-emulate the GCN4 "primitive culling accelerator". This obviously costs a lot of shader cycles, but it still results in a win. I am glad to see that GCN4 handles most of this busywork more efficiently with fixed-function hardware.
GCN's geometry pain point came up at the PS4 launch, with Mark Cerny discussing using compute to perform a stripped-down version of a vertex shader and using the then-new ability for compute to serve as a front-end.
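
For anyone who hasn't gone through the Frostbite material yet: the core of compute-based primitive culling is a cheap per-triangle test run in a compute shader before the triangle ever reaches the fixed-function front-end. A rough CPU-side sketch of the kind of tests involved, assuming counter-clockwise front faces and screen-space positions in pixels (the actual presentations use more tests and GPU-friendly formulations):

```cpp
#include <cmath>

struct Vec2 { float x, y; };  // screen-space vertex position, in pixels

// Returns false if the triangle can be culled before rasterization.
bool TriangleVisible(Vec2 a, Vec2 b, Vec2 c)
{
    // Backface/zero-area test: signed area via a 2D determinant.
    // With counter-clockwise front faces, det <= 0 means cull.
    float det = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    if (det <= 0.0f)
        return false;

    // Small-primitive test: if the triangle's bounding box rounds to the same
    // pixel coordinate on either axis, it cannot cover any pixel sample point.
    float minX = std::fmin(a.x, std::fmin(b.x, c.x));
    float maxX = std::fmax(a.x, std::fmax(b.x, c.x));
    float minY = std::fmin(a.y, std::fmin(b.y, c.y));
    float maxY = std::fmax(a.y, std::fmax(b.y, c.y));
    if (std::round(minX) == std::round(maxX) ||
        std::round(minY) == std::round(maxY))
        return false;

    return true;
}
```

In the actual technique this test runs in a compute shader that compacts the surviving indices into a new index buffer, so the fixed-function hardware only ever sees triangles that can contribute pixels.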
 
Nice try... reversing burden of proof. :D

Wait a sec, I thought the burden of proof is on the one who claims the RX480 can damage a mobo. Wasn't Tom's review saying something along the lines of "it's so bad we're afraid our high-end mobo may blow up, so we won't risk it"? Well that's great: get a 30 EUR mobo and test it. If it blows up, you've just uncovered something equivalent to the VW emissions cheating in GPU space. Surely a massive recall would follow. Might even destroy AMD.
 
Wait a sec, I thought the burden of proof is on the one who claims the RX480 can damage a mobo. Wasn't Tom's review saying something along the lines of "it's so bad we're afraid our high-end mobo may blow up, so we won't risk it"? Well that's great: get a 30 EUR mobo and test it. If it blows up, you've just uncovered something equivalent to the VW emissions cheating in GPU space. Surely a massive recall would follow. Might even destroy AMD.

There are 7 websites and people that have stated this so far, two of them on retail boards, and it's early enough to address it quickly that, even if there is a real problem, it won't affect AMD much, if at all.
 

Wait a sec, I thought the burden of proof is on the one who claims the RX480 can damage a mobo. Wasn't Tom's review saying something along the lines of "it's so bad we're afraid our high-end mobo may blow up, so we won't risk it"? Well that's great: get a 30 EUR mobo and test it. If it blows up, you've just uncovered something equivalent to the VW emissions cheating in GPU space. Surely a massive recall would follow. Might even destroy AMD.
That VW scandal was actually huge, and the decision made by the EU was complete $hit; NOx is highly carcinogenic.

To the topic: you don't need the sarcasm. We have the numbers here (or at least part of them); it is not a mere opinion, it is being argued with facts.
 
I'm guessing transistors need more voltage as they degrade and less voltage when they are new. Without adaptive compensation you have to take that into account, so you save a couple of percent in power when the GPU is new.
Ah, yes, that makes more sense.
 
Regarding the 8-pin PEG: on its own it is not enough, as AMD will still need to work on the power distribution of the 480.
As an example, the 1080 FE pulls a peak burst of 60W and an average of 40W through the mainboard connector, with most of the "pressure" put onto the PEG connector.
How does the distribution of load between different power inputs work?
Do they have, say, 2 VRMs on one and 2 on the other?
 
What one-time calibration are we talking about...?
As noted, there's an on-boot run of the GPU's initial factory characterization program.
I'm wondering if there's something about the test setup or the particular systems or motherboards that could be causing the board to settle on worse parameters.

So much optimization of the power delivery, yet it only manages to be on par with the 970... Is 14nm GloFo that bad?
Maybe? Some of the voltage tweaking, and the way tiny overclocks blow way past the power limit, make it look like some of these power-management methods are not working as intended.
It's definitely non-trivial to get all of them right, but GCN has historically been generally much better about this.

About this adaptive age compensation: does it mean that the GPU gets slower over time?
I think it might happen, although on the other hand it might mean a board can go longer before getting flaky or bricking.

As for all this controversy around the PCIe spec: why don't all these sites simply test their theory, grab a cheap-ass mobo, and try to blow it up? Because that is what everyone is afraid of, right? So either test and prove your theory or stfu.
It might cause a cheap motherboard to bulge a capacitor or become somewhat unstable in two years; various components have margins and methods to try to survive out-of-spec behavior--possibly not forever. In an OEM system, it might mean that a warranty claim for a problem of indeterminate origin eventually has the blame put on the one device not playing by the rules.
 
Wait a sec, I thought the burden of proof is on the one who claims the RX480 can damage a mobo. Wasn't Tom's review saying something along the lines of "it's so bad we're afraid our high-end mobo may blow up, so we won't risk it"? Well that's great: get a 30 EUR mobo and test it. If it blows up, you've just uncovered something equivalent to the VW emissions cheating in GPU space. Surely a massive recall would follow. Might even destroy AMD.

No, the issue is a legal one, of both liability and making false claims. Regardless of the actual safety of the matter, being out of spec is not good. Being out of spec and claiming you are in spec is even worse... If AMD is smart they will just voluntarily remove the PCI-E sticker (or black it out) on affected products until the issue is resolved (or not). Hell, most people will never even notice or care about the difference, and if something does go wrong, well, it is caveat emptor. Of course, AMD does not want to do this because doing so would be a concession, but I can't see the alternatives being any better...

Furthermore, think about what you are saying. Yes, I can go out and buy a cheap motherboard and an RX480 and perform a single test. If it passes, that is great... for me. It is meaningless for everyone else. The idea that a single non-failure is somehow a validation is so deranged I can't even... Although, that might explain how it got "certified" in the first place... SMH.

The sad part is that this is a controversy which has no right to exist. Smart people work at AMD. Multiple people had to at least think: 110W GPU... 30-40W memory... 150W spec... something does not compute.
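
Just to spell that arithmetic out (the GPU and memory figures are the estimates above; the VRM efficiency is an assumed ballpark, not a measured number):

```cpp
#include <cstdio>

int main()
{
    // Estimates quoted above, plus an assumed ~90% VRM conversion efficiency.
    const double gpu_core_w = 110.0;
    const double memory_w   = 35.0;            // middle of the 30-40W range
    const double vrm_eff    = 0.90;            // assumption

    const double board_power = (gpu_core_w + memory_w) / vrm_eff;
    const double spec_limit  = 75.0 + 75.0;    // PCIe slot + one 6-pin, per spec

    std::printf("estimated board power: ~%.0f W, connector spec: %.0f W\n",
                board_power, spec_limit);      // ~161 W vs. 150 W
    return 0;
}
```

Even with generous rounding the budget lands above the 150W that the slot plus a single 6-pin are rated for, which is the mismatch being pointed at.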
 
To the topic: you don't need the sarcasm. We have the numbers here (or at least part of them); it is not a mere opinion, it is being argued with facts.
Actually, the statement that it can lead to blowing up mainboards is very far from an established fact :rolleyes:. It's pure conjecture at this point. If someone wants to show it is a real danger, he has to test it, preferably on the cheapest, lowest-quality mainboard he can find.
Btw., it doesn't work the other way around: one can never prove that it is not going to happen. Adhering to some spec is somewhat reassuring, but that was not the question.
 
Actually, the statement that it can lead to blowing up mainboards is very far from an established fact :rolleyes:. It's pure conjecture at this point. If someone wants to show it is a real danger, he has to test it, preferably on the cheapest, lowest-quality mainboard he can find.
Btw., it doesn't work the other way around: one can never prove that it is not going to happen. Adhering to some spec is somewhat reassuring, but that was not the question.

Using out-of-spec parts is just asking for trouble. There is a reason the specs exist in the first place and that is to be sure all parts work together flawlessly.
 
Actually, the statement that it can lead to blowing up mainboards is very far from an established fact :rolleyes:. It's pure conjecture at this point. If someone wants to show it is a real danger, he has to test it, preferably on the cheapest, lowest-quality mainboard he can find.
Well, that's why I said part of them. But even if you buy the board and test it, as explained in the last comment, that is not a validation. It can create a failure over time rather than immediately.

This is important not because it would happen but because it can happen. Same thing as why you can't use your cellphone in a plane.

Again, it is highly irresponsible of AMD to expose their customers to this just because they wanted to look better in the leaked pictures before the reviews. Just use the damn 8-pin, or at least make the card draw more from the 6-pin (although that would still be out of spec).

Sent from my HTC One using Tapatalk
 
Using out-of-spec parts is just asking for trouble. There is a reason the specs exist in the first place and that is to be sure all parts work together flawlessly.


Well, the specs were put together by the very companies that make and sell the products and systems; they know what is best to keep their business stable from a warranty standpoint. It's all about the money in the end.
 
How does the distribution of load between different power inputs work?
Do they have, say, 2 VRMs on one and 2 on the other?
There is no simple answer to that, as it involves the power supply/power distribution configuration guidelines from the ATX12V publications and the power-management solution and circuitry with its software/firmware.
There are also some 3rd-party manufacturers that provide these solutions for GPU vendors/IHVs.
Cheers
 
If for some strange reason their card indeed consumes only this much (and this is such a huge if, it's very hard to believe), that would go quite some way toward explaining the better-than-average results, as the GPU would always stay at the highest clock speed of 1266MHz. Other sites have tested the performance with a maximized power target (which has the same effect on clock speed) to be almost 10% higher. But that comes at the price of an average power consumption of 180+W or so. There is a pretty high chance they screwed something up. That they somehow got a golden sample appears quite unlikely.

Does that suggest that most reviewers can't keep their cards at 1266MHz, and that the performance being measured is more indicative of the base clock than the boost clock?

If so, wouldn't that mean an AIB card that could sustain 1600MHz would show stupendous results? In some games running circles around the Fury X?
 
Using out-of-spec parts is just asking for trouble. There is a reason the specs exist in the first place and that is to be sure all parts work together flawlessly.
Of course. But stepping a bit outside of some specs doesn't automatically mean that there will be trouble. And the question was: will there be problems?
We all know the 6-pin PCIe plugs are good for only 75W according to spec. 8-pin plugs are good for 150W, even though the number of 12V conductors is exactly the same. Forgetting the PCIe or ATX spec for a moment and looking just at the ratings of the actual plugs, one learns it shouldn't be much of a problem to supply (way) more than 200W through a 6-pin plug (if the power supply is built to deliver that much). So can we expect burnt 6-pin plugs if a graphics card draws more than 75W through that plug? Very likely not. And the same may very well be true for delivery through the PCIe slot. I agree one maybe shouldn't build a 3- or 4-way CrossFire system (and overclock the GPUs on top of it) and expect no problems, as it may strain the power delivery on the board. But as someone has explained already, it may even work without damaging the board (as the card starts to draw more through the 6-pin plugs). But this is also just conjecture; we just don't know at this point.
And as one usually can't do a negative proof (that's a general thing), the only way to show something actually will cause problems is to demonstrate them and try to narrow down the circumstances under which they occur.
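
To put rough numbers on the "looking just at the ratings of the actual plugs" point (the per-contact current here is an assumed ballpark for Mini-Fit Jr style terminals, not a figure from the PCIe spec):

```cpp
#include <cstdio>

int main()
{
    // Assumed ballpark rating for one Mini-Fit Jr style crimp contact;
    // the real figure depends on the terminal series and wire gauge.
    const double amps_per_contact = 8.0;
    const double volts            = 12.0;

    // Both the 6-pin and 8-pin PEG plugs carry three 12V conductors; the two
    // extra pins on the 8-pin are ground/sense, yet the specs say 75W vs 150W.
    const double six_pin_electrical = 3 * amps_per_contact * volts;  // raw contact capacity
    const double six_pin_spec       = 75.0;
    const double eight_pin_spec     = 150.0;

    std::printf("6-pin: ~%.0f W of raw contact capacity vs %.0f W per spec (8-pin spec: %.0f W)\n",
                six_pin_electrical, six_pin_spec, eight_pin_spec);  // ~288 W vs 75 W
    return 0;
}
```

That is just the quantified version of the point above: the connector spec carries a large safety margin compared with what the physical contacts can handle, which is why exceeding 75W on a 6-pin plug is unlikely to burn anything on its own.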
But even if you buy the board and test it, as explained in the last comment, that is not a validation.
Of course it is not. Absence of evidence isn't evidence of absence. But without evidence for the presence of problems, you just can't decide the question.

OT:
This is important not because it would happen but because it can happen. Same thing as why you can't use your cellphone in a plane.
Actually, some airlines allow the use of cellphones on a plane (a few even during the takeoff and landing phases, i.e. without any restrictions).
 
Does that suggest that most reviewers can't keep their cards at 1266MHz, and that the performance being measured is more indicative of the base clock than the boost clock?
In some sense, yes. Some reviews even stated the frequencies they were usually seeing. The general trend is that in low resolutions the clock will be slightly below 1.2GHz (averaged over several games, of course with large deviations between individual ones), but in higher resolutions it tends to hover around 1.15GHz or even below that. That's basically the reason why an increased power target also increases performance significantly (~9% on average in the German review linked here before). And it is also the reason why undervolting (while keeping the power target constant) increases performance by several percent (about 5% on average over all tested games in the same review). The power draw is limited to a certain value, and with lower voltages (and therefore lower power consumption at the same frequency) the card will boost higher on average. The same happens if you increase the power limit the card is allowed to use (which will result in 185W on average or so according to some tests; peaks can easily go above 200W).
That's just how the Boost or PowerTune features of nV and AMD work nowadays.
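
A toy model of that behavior, in case it helps (the constants are made up; only the P ≈ C·V²·f relation and the "clamp the clock until the power estimate fits the limit" logic matter):

```cpp
#include <cstdio>

// Toy Boost/PowerTune model: dynamic power scales roughly with C * V^2 * f,
// and the controller lowers the clock until that estimate fits under the limit.
double sustainedClockMHz(double maxBoostMHz, double voltage, double powerLimitW, double c)
{
    double powerAtBoost = c * voltage * voltage * maxBoostMHz;
    if (powerAtBoost <= powerLimitW)
        return maxBoostMHz;                        // enough headroom: full boost clock
    return powerLimitW / (c * voltage * voltage);  // otherwise the clock is clamped
}

int main()
{
    const double c = 0.095;  // arbitrary constant chosen so stock settings land near 150W
    std::printf("stock voltage, 150W limit : %.0f MHz\n", sustainedClockMHz(1266, 1.15, 150, c));
    std::printf("undervolted,   150W limit : %.0f MHz\n", sustainedClockMHz(1266, 1.08, 150, c));
    std::printf("stock voltage, 180W limit : %.0f MHz\n", sustainedClockMHz(1266, 1.15, 180, c));
    return 0;
}
```

With those made-up numbers the stock card throttles to roughly 1.19GHz, while either undervolting at the same limit or raising the limit at stock voltage lets it sit at the full 1266MHz, which is the same qualitative picture the reviews show.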
 