AMD: Speculation, Rumors, and Discussion (Archive)

The Stilt showed how to do it with MSI Afterburner and the command line, and has also provided a custom BIOS with the same "hack" built in, so you don't need Afterburner to do it
Yes. That's what's so surprising to me. I don't follow the overclocking scene at all and, up to now, thought they were using sanctioned APIs for overclocking or HW I2C hacks, not SW that can directly access the I2C ports of critical components.

Are there similar tools for Nvidia?
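
For illustration, this is roughly what pure-SW I2C register access looks like from Linux userspace. The bus number, device address, and register below are hypothetical placeholders, not the RX 480's actual controller (which sits behind the GPU's own I2C master and is reached through driver interfaces rather than a motherboard /dev/i2c node):

```c
/* Minimal sketch of userspace I2C register access on Linux.
 * Bus, slave address, and register are made-up placeholders. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);          /* hypothetical bus */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x30) < 0) {         /* hypothetical address */
        perror("ioctl");
        return 1;
    }

    unsigned char reg = 0x8D;                     /* hypothetical register */
    unsigned char val;

    /* Write the register index, then read one byte back. */
    if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
        perror("xfer");
        return 1;
    }
    printf("reg 0x%02X = 0x%02X\n", reg, val);
    close(fd);
    return 0;
}
```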
 
The official response was reasonably well done. I don't see the 6-pin ever being an issue, especially with the way they designed it. Hopefully they update (or stop production on) the reference board ASAP.
 
Perhaps there are hardware safeguards in place? It could indeed be problematic if it were possible to set the GPU's voltage to 3V.
 
The Stilt showed how to do it with MSI Afterburner and the command line, and has also provided a custom BIOS with the same "hack" built in, so you don't need Afterburner to do it
From what I last read, the BIOS was not proven to work, possibly due to a signature check on those specific changes.

Cheers

Edit:
Yep, it looks to be confirmed as signature protected:
Ok, that settles it then.
They are signature protected and cannot be modified unless the BIOS is re-signed on AMD's server. Business as usual.
http://www.overclock.net/t/1604979/...r-the-reference-rx-480-cards/90#post_25322892
 
In the Elmor case they use a HW I2C master, so anything is fair game there. But I'm talking specifically about pure SW access: I thought that critical power-control infrastructure would be protected by requiring signed binaries or cookies to prevent tampering.


Good point. I thought those had to be signed as well. ;)
Ah, OK,
I get where you are coming from.
Yeah, interesting point; thankfully virus writers are more interested in hooking into the BIOS/UEFI environment.

And TBH I was surprised it works via MSI Afterburner (still madness for the average person to try, IMO); I thought it would have had to be done with something like the Elmor EVC, for the reason you say.
Cheers

Edit:
Ironically, it seems AMD has protected this environment via the driver/BIOS: when I checked on the status of a custom BIOS, it was shown not to load due to a signature check.
http://www.overclock.net/t/1604979/...r-the-reference-rx-480-cards/90#post_25322892
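
As a rough illustration of why a byte-edited BIOS image fails to load, here is a generic image-signature check using OpenSSL's EVP API. This is a sketch of the general technique only; it says nothing about AMD's actual signing scheme, key handling, or image layout:

```c
/* Generic sketch of image signature verification (OpenSSL EVP API).
 * Illustrates the concept only; not AMD's actual mechanism. */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

/* Returns 1 if `sig` is a valid signature of `img` under the public
 * key read from `pubkey_pem`, 0 otherwise. */
int verify_image(const unsigned char *img, size_t img_len,
                 const unsigned char *sig, size_t sig_len,
                 FILE *pubkey_pem)
{
    EVP_PKEY *key = PEM_read_PUBKEY(pubkey_pem, NULL, NULL, NULL);
    if (!key) return 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = ctx
          && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, key) == 1
          && EVP_DigestVerifyUpdate(ctx, img, img_len) == 1
          && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(key);
    return ok;   /* flipping any byte of `img` makes this fail */
}
```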
 
It's not just Kepler vs GCN2; it holds true on earlier generations too. I don't see any reason why Maxwell would suddenly change this
More unfounded statements. Kepler was the only DX11 generation that had this happen to it. In fact, Kepler aged fine for its design; it's the GCN cards that caught up later.

Speaking of which, what about all the DX11 cards AMD dropped support for? All those HD 5xxx, 6xxx, and even some 7xxx cards? How would this affect those products' longevity, bug squashing, and future proofing? What happens if AMD transitions away from GCN completely down the road? Are we allowed to think all the support for these GCN generations will be dropped like dead weight? Of course, I don't need to bring the bad DX11 overhead that affects current GCN cards (vs. their Maxwell counterparts) into the discussion again.

They're still developing it in private as a platform to test new things on. At the beginning they said their plan was to get a "Mantle or Mantle-like (low-level) API" widely adopted; now we have the Mantle-like (low-level) DX12 and the Mantle fork Vulkan. How is that not a success?
More face-saving PR propaganda! Mantle is not a success because AMD is no longer alone in controlling its similar APIs; other vendors are involved too. Now the success of these APIs will not be attributed to AMD alone but to its competitors as well, and that won't translate into more sales for them, which would have happened if the Mantle initiative had taken off. Mantle is not a success because of all the games that promised to have it and then dropped support for it immediately; most of these games don't even have DX12 or Vulkan support (Star Wars, NFS, Mirror's Edge), they just dropped Mantle like dead weight.
 
Mantle was definitely a big success for AMD, since it heavily influenced DX12 and Vulkan. I'd argue it's in fact a good thing for AMD that Mantle never saw the light of day, as it eliminates having to support yet another API and therefore saves them money. DX12 and Vulkan carry the Mantle torch now.
 
Once you have I2C access to a voltage regulator, there are no more safeguards, unless they are in the regulator itself and there's a dead-man-switch register to lock the safeguard in.
Actually, there can be some additional safeguards. One can configure the registers of the IR3567B (each one individually) as read-only, or even block access to them completely. To gain access to these protected registers, a "password" is required (which can be changed 8 times by programming it to non-volatile memory within the controller). But to my understanding, this protection doesn't appear to be very strong: the password is short enough that all possibilities can simply be tried in a reasonable timeframe, if there isn't some limit on the number of attempts (which the datasheet fails to mention).
At least the registers controlling the load balancing between the phases are obviously unprotected in the case of the RX 480, so this option was not used.
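
To make the "try all possibilities" point concrete, here is a minimal sketch assuming a hypothetical 16-bit password and a hypothetical unlock probe; the real IR3567B password width, command codes, and transport are not given in this thread:

```c
/* Sketch of why a short register password is weak to exhaustive search.
 * The password width and unlock mechanism here are assumptions. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical transport: write a candidate password to the unlock
 * register over SMBus, return nonzero if a previously locked register
 * became writable afterwards. */
static int try_password(uint16_t pw)
{
    (void)pw;   /* stub; a real probe would go over I2C/SMBus */
    return 0;
}

int main(void)
{
    /* A 16-bit space is only 65,536 attempts; even at a slow
     * ~100 SMBus transactions per second, that's under 11 minutes. */
    for (uint32_t pw = 0; pw <= 0xFFFF; pw++) {
        if (try_password((uint16_t)pw)) {
            printf("unlocked with 0x%04X\n", (unsigned)pw);
            return 0;
        }
    }
    puts("no password found");
    return 1;
}
```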
 
More face-saving PR propaganda! Mantle is not a success because AMD is no longer alone in controlling its similar APIs; other vendors are involved too. Now the success of these APIs will not be attributed to AMD alone but to its competitors as well, and that won't translate into more sales for them, which would have happened if the Mantle initiative had taken off. Mantle is not a success because of all the games that promised to have it and then dropped support for it immediately; most of these games don't even have DX12 or Vulkan support (Star Wars, NFS, Mirror's Edge), they just dropped Mantle like dead weight.

It's definitely a huge victory for them, as it moves game rendering away from DX11 and OpenGL, which weren't well suited to their hardware. DX12, and Vulkan in particular (since it is Mantle under a different name), have successfully moved things such that the rendering APIs of the future are far better suited to their hardware. Just the fact that the new Doom is using Vulkan (Mantle) rather than OpenGL is a huge win for AMD. The fact that Nvidia is forced to support Vulkan (Mantle) is a huge win.

It was inevitable that Mantle became an API controlled by committee if it was ever to fulfill its promise of being open to all hardware, as AMD always stated was their goal for Mantle. Keeping it from being open would have resulted in nothing more than a closed API similar to CUDA, which was not what they stated they wanted for Mantle.

Hence, getting it used by the Khronos Group for the gaming-oriented successor to OpenGL is basically what AMD wanted when they set out to create Mantle. It is now an industry standard. They are no longer the only group developing it, and hence no longer the only group bearing the cost of developing it. They've successfully gotten Nvidia and other industry players involved in it.

Every API evolves. The fact that Mantle is called Vulkan now doesn't mean it isn't still Mantle. Just as DirectX changes over time, so did Mantle in becoming the open standard that AMD wanted it to become.

With regard to graphics rendering for games, everyone except maybe Nvidia should be ecstatic that AMD was able to get the Khronos Group to accept Mantle as the basis for a new rendering API. OpenGL was never going to be able to transition into a proper competitor to DirectX, not with Nvidia having so much control in dictating how it changed, or in reality, how it couldn't change. That was good for the large non-gaming 3D-accelerated business market, but not good for OpenGL's advancement as a rendering API for games. Of course, that also benefited Nvidia greatly in that space, as they could easily block any changes AMD might have wanted implemented that were friendly to AMD's hardware. Vulkan (Mantle) changes all of that.

Regards,
SB
 

Quite, although I think we are still waiting for the Vulkan implementation of Doom to be released.
 
The Vulkan patch is taking a weirdly long time to arrive.
At first I thought it would launch alongside the RX 480, but it's now been over a week and still nothing.
And more than two months after Doom's release isn't really "shortly after release" anymore.
 
Actually, there can be some additional safeguards. One can configure the registers of the IR3567B (each one individually) as read-only, or even block access to them completely. To gain access to these protected registers, a "password" is required (which can be changed 8 times by programming it to non-volatile memory within the controller). But to my understanding, this protection doesn't appear to be very strong: the password is short enough that all possibilities can simply be tried in a reasonable timeframe, if there isn't some limit on the number of attempts (which the datasheet fails to mention).
At least the registers controlling the load balancing between the phases are obviously unprotected in the case of the RX 480, so this option was not used.
Has this ever been protected this way on GPUs, though (although ironically, in many cases it is protected via the driver/BIOS)?
I remember an I2C-controller-type solution from Elmor being around since 2013 and working with various GPUs, going back as far as the 7790.
I appreciate this also means cards not using IR PWM controllers as well.
Thanks
 
I was surprised the VRMs can be user-programmed to change current draw, but it's obviously possible given the Afterburner demonstration and AMD's upcoming driver fix. The programmability of VRM behavior seems like a useful and powerful, but dangerous, ability to expose without significant BIOS-level restrictions. If a software driver can change current draw so significantly, you could imagine a deliberately malicious tool instructing the RX 480's beefy VRMs to draw maximum current from the PCIe slot until it failed, physically destroying your computer and even potentially starting a fire. All from software. This vulnerability might have always existed on older GPUs, and it could be present on Nvidia as well as AMD cards.
 
http://videocardz.com/62009/amd-rel...mson-16-7-1-drivers-fixing-power-distribution
Radeon Software Crimson Edition 16.7.1 Highlights

  • The Radeon™ RX 480’s power distribution has been improved for AMD reference boards, lowering the current drawn from the PCIe bus.
  • A new “compatibility mode” UI toggle has been made available in the Global Settings menu of Radeon Settings. This option is designed to reduce total power with minimal performance impact if end users experience any further issues. This toggle is “off” by default.
  • Performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3% [1]. These optimizations are designed to improve the performance of the Radeon™ RX 480, and should substantially offset the performance impact for users who choose to activate the “compatibility” toggle.
 
I was surprised the VRMs can be user-programmed to change current draw, but it's obviously possible given the Afterburner demonstration and AMD's upcoming driver fix. The programmability of VRM behavior seems like a useful and powerful, but dangerous, ability to expose without significant BIOS-level restrictions. If a software driver can change current draw so significantly, you could imagine a deliberately malicious tool instructing the RX 480's beefy VRMs to draw maximum current from the PCIe slot until it failed, physically destroying your computer and even potentially starting a fire. All from software. This vulnerability might have always existed on older GPUs, and it could be present on Nvidia as well as AMD cards.
It is protected at the driver/BIOS level and requires digital signing (in this case from AMD) on a per-parameter basis. I agree it is surprising, and a potential security issue, that it can be done via software such as Afterburner, which does not have the same integration/access as the hardware-software I2C controller solutions brought up a bit earlier, like the Elmor EVC (not suggesting that is the best option, but it is a known one).
But then most "hackers" would be looking for exploits via the BIOS/firmware-driver environment, including looking for ways to compromise UEFI.
Cheers
 
I don't get it... The whole point of being in spec on the motherboard side is so you don't take the blame if/when something goes wrong. They improved it, but are still out of spec... Why wouldn't they just shift more toward the 6-pin? It should be able to take it.
 
And achieve what, exactly? They would still be breaking the PCIe connector spec, so how does that solve the "if something goes wrong" argument? Also, I think the spec allows a tolerance of around ±10%.
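
For a rough sense of the numbers, using the commonly cited limits of 5.5 A at 12 V from the slot and 75 W from the 6-pin connector (the 50/50 split is an assumption for illustration, not a measured figure):

```latex
\[
P_{\mathrm{slot,max}} = 5.5\,\mathrm{A} \times 12\,\mathrm{V} = 66\,\mathrm{W},
\qquad
P_{\mathrm{6pin,max}} = 75\,\mathrm{W}
\]
\[
P_{\mathrm{board}} \approx 150\,\mathrm{W}
\;\Rightarrow\;
P_{\mathrm{slot}} \approx 75\,\mathrm{W} \approx 1.14 \times P_{\mathrm{slot,max}}
\]
```

So shifting the excess to the 6-pin would keep the slot in spec but push the connector past its nominal 75 W rating; staying fully in spec on both rails would mean cutting total board power, which appears to be what the "compatibility mode" toggle does.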
 