AMD: Speculation, Rumors, and Discussion (Archive)

You can't prove the non-existence of something with one sample, especially when it could be a failure that only shows up after a while.

Maybe. But who enabled them?
If the 480 pulled all 200W or more through the PCIe slot, maybe there would be an argument that something might happen. As for who enabled them: whoever sold them crappy components in the first place. I really don't see the mobo being the point of failure here; something else will explode for any number of reasons long before that happens.
 
If the 480 pulled all 200W or more through the PCIe slot, maybe there would be an argument that something might happen. As for who enabled them: whoever sold them crappy components in the first place. I really don't see the mobo being the point of failure here; something else will explode for any number of reasons long before that happens.
It's really simple: the spec exists for a reason, and violating the spec is a problem. If it weren't a problem, why doesn't everyone do it? I'm quite sure NVIDIA didn't want a 6-pin connector on the reference GTX 950, but they put it there because without it the card would be out of spec.
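For anyone who wants the actual numbers behind this argument, here's a rough sketch of the board-power arithmetic. The slot and connector figures are the commonly quoted PCIe CEM limits; the Python below is just my own illustration, not anything from the spec document itself:

```python
# Rough PCIe board-power arithmetic (commonly quoted CEM limits).
# A x16 slot supplies up to ~75 W total:
#   12 V rail: 5.5 A -> 66 W;  3.3 V rail: 3.0 A -> ~9.9 W.
# Each auxiliary connector adds 75 W (6-pin) or 150 W (8-pin).

SLOT_W = 12 * 5.5 + 3.3 * 3.0          # ~75.9 W, usually rounded to "75 W"
AUX_W = {"6-pin": 75, "8-pin": 150}

def board_budget(aux_connectors):
    """In-spec board power for a list of aux connectors, in watts."""
    return SLOT_W + sum(AUX_W[c] for c in aux_connectors)

print(board_budget([]))          # ~75.9  -> slot only; a 950 without a 6-pin lives here
print(board_budget(["6-pin"]))   # ~150.9 -> the RX 480's configuration
print(board_budget(["8-pin"]))   # ~225.9
```

The point being that a ~150 W card fed by the slot plus a single 6-pin has essentially no headroom, so any imbalance that shifts load toward the slot quickly pushes it past the 66 W limit on the slot's 12 V rail.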
 
Note that the "TBD" in the 4 GB/1.35 V row follows a "/". My interpretation would be that 8 Gbps at 1.35 V is still TBD, while 7 Gbps is available.
Ah yes, you're probably right. Though even if it's available, I don't think there really are any desktop graphics cards which use 1.35 V GDDR5. I have no idea how much more the faster (or lower-voltage, which amounts to the same thing) variants cost, but it looks like it isn't deemed worthwhile (obviously for notebooks that's an entirely different story...).
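As an aside on what those Gbps figures actually buy: peak bandwidth is just the per-pin rate times the bus width. A quick sketch; the 256-bit bus is my assumed example (an RX 480-class card), not something stated above:

```python
# Peak GDDR5 bandwidth = per-pin data rate * bus width / 8 bits per byte.

def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(7, 256))   # 224.0 GB/s at 7 Gbps
print(bandwidth_gb_s(8, 256))   # 256.0 GB/s at 8 Gbps
```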
 
It's really simple: the spec exists for a reason, and violating the spec is a problem. If it weren't a problem, why doesn't everyone do it? I'm quite sure NVIDIA didn't want a 6-pin connector on the reference GTX 950, but they put it there because without it the card would be out of spec.
There is a reason, but thinking the hardware will suddenly become damaged or explode because the specs are slightly exceeded is alarmist. AMD should fix the issue, but you won't see fires popping up all over the place.
 
If the 480 pulled all 200W or more through the PCIe slot, maybe there would be an argument that something might happen.
It doesn't matter. The story has nothing to do with the practical outcome.

As for who enabled them: whoever sold them crappy components in the first place. I really don't see the mobo being the point of failure here; something else will explode for any number of reasons long before that happens.
AMD is the enabler.

I'm willing to bet good money that somewhere on their servers there's an email from a now vindicated but very frustrated engineer that says "we're pushing the clocks too hard and we're violating the PCIe spec," and that somebody overruled him using exactly the same argument as you: looking only at the practical consequences and ignoring public perception.
 
There is a reason, but thinking the hardware will suddenly become damaged or explode because the specs are slightly exceeded is alarmist. AMD should fix the issue, but you won't see fires popping up all over the place.
The primary issue is that if someone's mobo blows up with an RX 480 installed, AMD is going to be on the hook regardless of whether they are actually at fault, since they are clearly out of spec and everyone knows it. I can hear the RMA calls now: "What GPU did you have installed at the time of failure?" Not to mention the public-perception issue noted above.
 
It's simple: either you follow the specs or things may happen. We don't know if it will happen immediately, in the near future, or in the distant future, but the possibility of something happening is there.

Even when everything is in spec, the average failure rate for motherboards under warranty is around 3%. Going out of spec can only increase that.

So if something is out of spec, OEMs and system builders can't warranty it, because the manufacturer of the motherboard won't honor the warranties.

It's the same reason overclocking voids the warranty; in this case the reference card will do the same.
 
I'd say it's just about guaranteed that a year from now we won't have a single mobo failure attributed to a melting PCIe slot without some serious unrelated failure. That's simply not the part that will fail, with or without the spec being adhered to. I'm sure one of these reviewers has had FurMark running all weekend, and I've yet to see any articles. If they've sold thousands of cards so far, surely someone with a weak system has melted one by now. Overclockers should easily damage it if it's really an issue. This isn't the sort of thing that will slowly get worse over time.
 
I'd say it's just about guaranteed that a year from now we won't have a single mobo failure attributed to a melting PCIe slot without some serious unrelated failure. That's simply not the part that will fail, with or without the spec being adhered to. I'm sure one of these reviewers has had FurMark running all weekend, and I've yet to see any articles. If they've sold thousands of cards so far, surely someone with a weak system has melted one by now. Overclockers should easily damage it if it's really an issue. This isn't the sort of thing that will slowly get worse over time.
I disagree. The early adopters are the ones with at least half-decent systems. It's when stock becomes steady and the card is far more mainstream that there might be issues. Yes, people really do add GPUs to their basic Dell systems. According to you those are below spec; in fact, in your mind there is only below-spec hardware and above-spec hardware, and nothing that actually reliably meets the PCIe spec.
 
I'd say it's just about guaranteed that a year from now we won't have a single mobo failure attributed to a melting PCIe slot without some serious unrelated failure. That's simply not the part that will fail, with or without the spec being adhered to. I'm sure one of these reviewers has had FurMark running all weekend, and I've yet to see any articles. If they've sold thousands of cards so far, surely someone with a weak system has melted one by now. Overclockers should easily damage it if it's really an issue. This isn't the sort of thing that will slowly get worse over time.

Look, you seem like a fairly competent guy. Just take any wire, check its specs, and put a current through it at higher-than-spec'd amps, volts, and wattage, and tell me what happens. Run it till the wire fails.

It's not a question of whether it can happen; it will happen eventually. Ask anyone who works in electronics and they will tell you the same thing.
 
So someone needs to take a cheap PC that doesn't meet specs and throw another product that may not meet specs into it, at which point it's still likely to fail the same way it would under standard operation. Any failure case will be some guy using a $20 power supply whose mobo must have failed while he overclocked his 480. The only people pushing this are a bunch of alarmists trying to score marketing points or generate clickbait.

I mostly agree with you, but the fact that the GTX 970 is frying some motherboard PCIe slots shows that the potential exists. Especially as the RX 480 is pulling more power from the slot than the GTX 970.

That likely comes down to a cheaply made motherboard or a defective PCIe slot. But the problem is that regardless of whether it is the RX 480's fault, it will be attributed to the RX 480 due to the bad press it is currently receiving. Bad press that is warranted to some degree, due to the card being further out of spec than other graphics cards with respect to power draw from the PCIe slot.

Heck, when one of Nvidia's drivers caused some of their graphics cards to fry motherboards, the blame was correctly attributed to Nvidia. However, I don't remember seeing the media frenzy and outrage over that which we now see for the RX 480. It's a problem, certainly, and one that AMD needs to address. And I certainly agree with you about the alarmist "the sky is falling" message a lot of sites and users are running with right now. But denying a problem exists is almost as bad.

Regards,
SB
 
Heck, when one of Nvidia's drivers caused some of their graphics cards to fry motherboards, the blame was correctly attributed to Nvidia. However, I don't remember seeing the media frenzy and outrage over that which we now see for the RX 480. It's a problem, certainly, and one that AMD needs to address. And I certainly agree with you about the alarmist "the sky is falling" message a lot of sites and users are running with right now. But denying a problem exists is almost as bad.

I concur. Let me note, though, that Nvidia pulled the driver and replaced it with a new one in which the problem was addressed. As soon as that happens for the RX 480, my guess is that the commotion will quiet down quickly.
 
Look, you seem like a fairly competent guy. Just take any wire, check its specs, and put a current through it at higher-than-spec'd amps, volts, and wattage, and tell me what happens. Run it till the wire fails.

It's not a question of whether it can happen; it will happen eventually. Ask anyone who works in electronics and they will tell you the same thing.
As someone who spent a few semesters helping college freshmen wire up digital circuits, amplifiers, and power supplies, I've yet to see the wire be the point of failure in those scenarios. I've seen caps fly, ICs melt, even rolling flames, but never the wire, excluding metal fatigue/breakage. The only exception to that was a demonstration of what not to do with high-voltage industrial transformers.
 
I concur. Let me note, though, that Nvidia pulled the driver and replaced it with a new one in which the problem was addressed. As soon as that happens for the RX 480, my guess is that the commotion will quiet down quickly.

That's my hope as well. I remain cynical, however. Less so on AMD addressing the issue (they have to at this point, even if it's a recall) than on the public suddenly forgetting about it and forgiving AMD as they did Nvidia with the driver situation.

Regards,
SB
 
As someone who spent a few semesters helping college freshmen wire up digital circuits, amplifiers, and power supplies, I've yet to see the wire be the point of failure in those scenarios. I've seen caps fly, ICs melt, even rolling flames, but never the wire, excluding metal fatigue/breakage. The only exception to that was a demonstration of what not to do with high-voltage industrial transformers.


So two wires connected together through a connector doesn't fit one of those scenarios?

Ask your professors and see what they say. You are arguing something that makes no sense. If you were given specs to build something, would you go out of spec and then expect others to expect it to work? And if it hurt their parts, do you think they would say the problem is on their end or on yours?

Any angle you look at this from, it's AMD's issue and it's their ass on the line. They need to fix it.

It shouldn't even be a discussion topic.
 
As someone who spent a few semesters helping college freshmen wire up digital circuits, amplifiers, and power supplies, I've yet to see the wire be the point of failure in those scenarios. I've seen caps fly, ICs melt, even rolling flames, but never the wire, excluding metal fatigue/breakage. The only exception to that was a demonstration of what not to do with high-voltage industrial transformers.
But the PCIe slot is not really wires; its 12 V contacts aren't the kind you see with Molex/ATX connectors.
Cheers
 
So two wires connected together through a connector doesn't fit one of those scenarios?
Nope. I've seen components melt breadboards, which are the same material, but never the wires or connectors. There simply isn't enough resistance to generate the heat required before something else inevitably fails. Any scenario that pulls enough current to damage a wire or connector will have tripped any current protection circuit long before that occurred.
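To put a rough number on that "not enough resistance" point: contact resistance in a connector is on the order of milliohms, so I²R heating stays small even somewhat past spec. The 10 mΩ figure below is an assumed ballpark on my part, not a measured value:

```python
# Joule heating in a single contact: P = I^2 * R.
# The 10 mOhm contact resistance is an assumed ballpark, not a measurement,
# and the slot's 12 V current is actually shared across several pins,
# so this lumped figure overstates the per-pin heat.

def pin_dissipation_w(current_a, contact_resistance_ohm=0.010):
    """Dissipation in one contact, in watts."""
    return current_a ** 2 * contact_resistance_ohm

print(pin_dissipation_w(5.5))   # ~0.30 W at the 5.5 A slot limit
print(pin_dissipation_w(7.0))   # ~0.49 W at an out-of-spec 7 A draw
```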
 
I think the first misconception with DX12, maybe due to Mantle, was to think of it as "free performance gain" (but who cares about winning 10 fps here and there). What was missing was using that efficiency to include new features (such as the AI you cite). Of course, when you see the results of some current titles, and the state of a gaming industry largely dominated by immediate profit (banking as fast as they can), it's all a bit confusing right now.

The problem is always the same: games, and specifically the engines that run them, take time to develop. You can "port" them to a different API (DX11 > DX12), but done on a short timescale it will surely still be the same engine underneath.
Today's DX12 roster of games can hardly be a point in favor of AMD. First of all, there are so few of them (six, maybe?); secondly, of these few, three are actually in AMD's favor and the others are in NVIDIA's favor. So it's basically a draw. Two of those games were AMD-optimized titles, so AMD's advantage in them is hardly due to a superior arch or because their arch is better suited to DX12. It's like saying NV's arch is superior based on their PhysX or GameWorks titles!

If we follow history, AMD was first with DX10.1, DX11, and Mantle, and it hardly made a difference to their status or product attractiveness. In fact, NV put its weight behind tessellation (a sub-feature of DX11) and embarrassed AMD with it, then continued to embarrass AMD with lower driver overhead in DX11. So who knows what strategy NV could come up with to deal with DX12 this time around, with their vastly superior funds and developer relations? Even Mantle flopped and was quickly forgotten. New AMD cards can't even run Mantle well!

The future is full of uncertainty, and the data we have now favor neither AMD nor NV. It's a battle that will be decided over years to come. Anyone who says otherwise is probably indulging in a little too much wishful thinking.
 