AMD: Speculation, Rumors, and Discussion (Archive)

A guy on another forum already reprogrammed the VRM controller to shift load towards the 6-pin and remove the perceived issue. I can't imagine AMD are having that much difficulty if a random guy on a forum can do it. Just limit the power a bit with the mechanism already in place and let the AIBs sell factory-OC'd boards, and the issue that never really existed is gone until someone overclocks their card.
 

As for the fix: in the age of programmable power controllers I really fail to see why they can't just reroute power and break the spec via the power connector instead, which is not that bad according to Grall.
 

Via the driver? Could you share a link to that?
 
http://www.overclock.net/t/1604979/...lot-power-draw-for-the-reference-rx-480-cards

He used MSI Afterburner, but it's the same way you would adjust voltages; it's just not normally exposed. They could probably add it to WattMan if they wanted. A BIOS flash would change the default settings, but there's no reason you can't override them with the drivers.
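Conceptually, the hack just changes how the board's total load is split between the slot and the 6-pin connector. A toy illustration (the 160 W figure and the share values are assumptions for the example, not measurements from the card):

```python
# Toy illustration: shifting the load split between the PCIe slot and the 6-pin.
# The 160 W total and the share values are assumptions, not measured RX 480 data.
TOTAL_BOARD_W = 160.0  # rough average board power discussed in this thread

def slot_power(slot_share: float) -> float:
    """Power pulled through the PCIe slot for a given share of the total load."""
    return TOTAL_BOARD_W * slot_share

print(slot_power(0.50))  # even split: 80 W from the slot, above the 66 W (5.5 A @ 12 V) budget
print(slot_power(0.40))  # biased towards the 6-pin: 64 W from the slot, back inside the budget
```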
Very cool hack! If one is willing to ignore the security consequences of a virus being able to blow up your GPU... which I totally am. ;)

Is this kind of power regulator hacking commonplace? I thought that access to these kinds of I2C buses would be restricted to some internal controllers that run signed firmware?
 

Cool, thanks for the info. I wish someone at Tom's or PCPer could give this a whirl and confirm how it affects the current draw distribution.
 
AMD status update:

The default option sounds a great deal like The Stilt's hack.

AMD said:
We promised an update today (July 5, 2016) following concerns around the Radeon™ RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We’re pleased to report that this driver—Radeon Software 16.7.1—is now undergoing final testing and will be released to the public in the next 48 hours.

In this driver we’ve implemented a change to address power distribution on the Radeon RX 480 – this change will lower current drawn from the PCIe bus.

Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.

Finally, we’ve implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3% [1]. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the “compatibility” toggle.

AMD is committed to delivering high quality and high performance products, and we’ll continue to provide users with more control over their product’s performance and efficiency. We appreciate all the feedback so far, and we’ll continue to bring further performance and performance/W optimizations to the Radeon RX 480.

[1] Based on data comparing Radeon Software 16.6.2 vs. 16.7.1:
Total War: Warhammer, ultra settings, 1080p: 74.2 FPS vs. 78.3 FPS
Metro Last Light, very high settings, 1080p: 80.9 FPS vs. 82.7 FPS
The Witcher 3, ultra settings, 1440p: 31.5 FPS vs. 32.5 FPS
Far Cry 4, ultra settings, 1440p: 54.65 FPS vs. 56.38 FPS
3DMark11 Extreme: 22.8 vs. 23.7
System config: Core i7-5960X, 16GB DDR4-2666MHz, Gigabyte X99-UD4, Windows 10 64-bit. Performance figures are not average, may vary from run-to-run.
 
So someone needs to take a cheap PC that doesn't meet specs and throw another product that may not meet specs into it, at which point it's still likely to fail the same way it would under standard operation. Any failure case will be some guy using a $20 power supply whose mobo must have failed while he was overclocking his 480. The only people pushing this are a bunch of alarmists trying to score marketing points or generate clickbait.

It can easily meet the specs and still be challenged by the RX 480. We agree that the RX 480 is able to draw around 160 W on average (without OC), we agree that the slot is rated for 5.5 A at 12 V per the specification, and that the 6-pin offers 75 W. But the spec allows the slot voltage to vary by ±8%, so the worst case is about 11.1 V in the slot. At 11.1 V you would need 7.2 A to draw 80 W, and that is about 30% more than the allowed 5.5 A (see the quick calculation below). And that is without the user playing with WattMan, which is part of the official driver package.

Sure, this will only cause problems in very rare cases (without OC), but I am having trouble finding any acceptable reason for the decision. If you want users with only a single 6-pin PCIe power connector to buy the card, those are very likely to be cheap systems. Going to an 8-pin would have made no practical difference to the card, and the price difference would not be noticeable, but maybe AMD thought it would not "look" efficient enough. It is not as if they are pushing the specs because they need to; it looks as if they are pushing the specs because they went with the cheap solution.
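Putting the arithmetic above in one place (the 80 W slot draw is the figure being argued about in this thread, not a measured constant):

```python
# Worst-case PCIe slot current using the numbers from the post above.
SPEC_VOLTAGE = 12.0   # nominal 12 V slot rail
TOLERANCE = 0.08      # spec allows +/- 8% on the voltage
SPEC_CURRENT = 5.5    # allowed slot current in amps (~66 W at 12 V)
SLOT_DRAW_W = 80.0    # slot power draw figure being argued about

worst_case_v = SPEC_VOLTAGE * (1 - TOLERANCE)   # ~11.0 V
current_needed = SLOT_DRAW_W / worst_case_v     # ~7.2 A
excess = current_needed / SPEC_CURRENT - 1      # ~0.32, i.e. roughly 30% over spec

print(f"{worst_case_v:.1f} V -> {current_needed:.1f} A ({excess:.0%} over the 5.5 A limit)")
```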
 
In all likelihood it was a mistake/bug. As has been pointed out repeatedly, AMD's other cards do not show this behaviour. They are fixing it now with a software update.
Find something else to complain about. May I suggest the power draw?
 
A rather unfounded observation, specifically regarding the Maxwell generation; what did it amount to, basically? Nothing. According to the Steam charts, the GTX 970 is the most widely used GPU on the planet.

It's true the Kepler generation takes a bigger hit in contemporary games than the Hawaii generation, due to a multitude of factors: memory shortage being one of them, an inherent weakness in compute, etc. However, that doesn't exclude AMD cards from similar problems, one relating to higher CPU overhead, which makes them unfavorable when coupled with low- to mid-range CPUs like the Core i3, for example. That pretty much makes AMD's entire mid-range GPU lineup less favorable compared to NVIDIA's Maxwell counterparts, unless you have a good CPU of course, which is not common among buyers of this class of GPU.
It's not just Kepler vs. GCN2; it holds true for earlier generations too. I don't see any reason why Maxwell would suddenly change this.
That's just marketing BS. AMD wanted Mantle to expand their market share and become more popular; that's why they incorporated it into several games, announced more games to come, and had plans to develop further versions of it. But in the end it didn't pan out, and developers simply dropped it, even from the games that had promised to have it, because, as you said, they lacked the market penetration and the resources to keep developing for it with each architecture iteration. Besides, their lacking DX11 performance didn't help them either.
They're still developing it in private as a platform to test new things on. At the beginning they said their plan was to get a "Mantle or Mantle-like (low-level) API" into wide adoption; now we have the Mantle-like (low-level) DX12 and the Mantle fork Vulkan. How is that not a success?
 
Very cool hack! If one is willing to ignore the security consequences of a virus being able to blow up your GPU... which I totally am. ;)

Is this kind of power regulator hacking commonplace? I thought that access to these kinds of I2C buses would be restricted to some internal controllers that run signed firmware?
Well, the hack is programming/changing parameters in the PWM controller.
In the 1500 MHz overclock I linked earlier, they also had to do a similar thing to change power-related settings. They used a nice tool called the Elmor EVC, which is basically very similar but has both a hardware and software interface to the PWM controller over I2C, so one has much better flexibility over the options.

It is pretty standard communication, and I would say BIOS viruses are a more likely issue, IMO.
Cheers
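For anyone curious what that communication looks like at the lowest level, here is a minimal sketch using the Linux i2c-dev interface through the smbus2 Python library. The bus number, device address, register and value below are placeholders for illustration only; on a real card the VRM controller (reportedly an IR3567B on the reference RX 480) hangs off the GPU's own I2C master, which is why tools like Afterburner or the Elmor EVC go through the driver or dedicated hardware rather than the host's SMBus.

```python
# Minimal sketch: reading/poking a register on an I2C/SMBus PWM controller.
# All numbers below are placeholders, NOT real RX 480 / IR3567B values.
from smbus2 import SMBus

I2C_BUS = 1        # hypothetical adapter number (see /dev/i2c-*)
VRM_ADDR = 0x22    # hypothetical 7-bit address of the PWM controller
LIMIT_REG = 0x40   # hypothetical register holding a current-limit / balance setting

with SMBus(I2C_BUS) as bus:
    old = bus.read_byte_data(VRM_ADDR, LIMIT_REG)      # read the current setting
    print(f"register 0x{LIMIT_REG:02X} = 0x{old:02X}")
    # bus.write_byte_data(VRM_ADDR, LIMIT_REG, new_value)  # writing is the dangerous part
```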
 
So, Fermi vs. Evergreen/Cayman? Don't think so. ;)
I need to dig around a bit, but whenever someone has released a benchmark showing some of the older cards, the AMD ones have been doing relatively better than their NVIDIA counterparts of the day.
 
Separately, we’ve also included an option to reduce total power with minimal performance impact. Users will find this as the “compatibility” UI toggle in the Global Settings menu of Radeon Settings. This toggle is “off” by default.

Wouldn't it make more sense to have the toggle "on" by default to reduce total power?
 
Let's just hope this doesn't happen again ...



https://bitcointalk.org/index.php?topic=1433925.msg15438647#msg15438647
 