AMD: Speculation, Rumors, and Discussion (Archive)

Low stock and high demand. Ethereum miners probably bought a crapload, and they'll probably burn out their motherboards mining.
 
According to a few sources the supply is still notably higher than the GTX 1080's was, for example, and I don't see why people are suddenly so worried about motherboards. There are, for example, GTX 950s without a power connector which draw pretty much exactly 75W while gaming at stock, and they have no issues OC'ing and thus pulling more than 75W from the PCIe slot.
 
People buy like 6 cards to mine, and the mobo has to supply like 80W to each GPU. Pulling near 500W through the PCIe slots means 500W from a 4-pin or 6-pin connector on the mobo. That's a lot of power through the mobo circuitry.
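A quick sketch of the arithmetic behind that worry; the per-slot figure is the assumption stated above, not a measured number, and how the total is routed depends on the board layout:

```python
# Back-of-the-envelope total for a 6-card mining rig, using the
# ~80 W-per-slot assumption from the post above (not a measurement).
cards = 6
per_slot_w = 80
print(f"Total slot draw: {cards * per_slot_w} W")   # 480 W, i.e. "near 500 W"
# Whether that total funnels through one supplemental 4/6-pin connector
# or stays spread across six slots depends on the motherboard design.
```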
 
[Attached image: cVxI3ek.jpg]

So at the risk of asking a dumb question... is this supposed to be a die shot?
 
I was surprised at the negativity in this thread, to be honest. It was always supposed to be a midrange part, and its cost/performance at the moment (before we know what the 1060 will be like) is very competitive, so it is no wonder it is selling well.

Perhaps the hopes and expectations around performance per watt explain the more negative tone?

Does anyone in the desktop sector actually worry about how much it costs to run? True, temps and sound levels do not look good for the reference board, but AIBs will fix that. Overclocking headroom and power draw could be a slight concern, though.

I certainly want AMD to stick around; Nvidia cards are not cheap even now.
 
Yeah, I didn't realize the transistor count between Polaris 10 and GP104 was as close as it is. I know, potato, potahto, but the GTX 1080 achieving the performance it does with +26% transistors is impressive...
The GTX 1080 has much higher clocks. But it's a good question how Nvidia managed to push their clocks that high without a big increase in power usage. 1200 MHz is already a bit too high for AMD. They said themselves that the 470 will show the true power efficiency of Polaris, and it will be running at lower clocks.

Personally I think AMD has a good strategy in clocking their mid-tier (smaller die) GPU a bit above the sweet spot. People mostly buy these GPUs because they don't have money for the best. Bringing mid-tier power consumption closer to high end doesn't matter that much, but the small performance boost is always welcome. The 480 is not the best example of AMD's perf/watt. We have to wait for the 470 and 490.
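As a back-of-the-envelope illustration of why clocking past the sweet spot costs so much power: dynamic power scales roughly with frequency times voltage squared, so a modest clock bump that needs extra voltage gets expensive quickly. The clock/voltage pairs below are invented for illustration, not AMD's actual numbers:

```python
# Toy illustration of P_dynamic ~ f * V^2.
# The clock/voltage pairs are invented for illustration only.
def relative_power(f_mhz, v):
    return f_mhz * v ** 2

sweet_spot = relative_power(1120, 0.95)   # hypothetical "sweet spot" point
boosted    = relative_power(1266, 1.08)   # hypothetical factory boost point

print(f"Clock gain: {1266 / 1120 - 1:.1%}")            # ~13% more frequency
print(f"Power gain: {boosted / sweet_spot - 1:.1%}")   # ~46% more dynamic power
```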
 
The comparison is interesting but won't be a real-world scenario. No one will play at 1080p on a GTX 1080, and no one would (not saying they can't) play above 1080p on a 480.

The key to the success of the custom 480s will be the price. A 4GB 480 at 1400 MHz for 240 dollars would be a beast; for 300 it would be a waste.

That might be true. But many gamers still have just 1080p monitors, and it's good to know that if that's what you have, an AIB OC 480 might get near GTX 1080 performance at that res in some DX12 games. So unless you're supersampling or upgrading monitors, the RX 480 might virtually be the best buy regardless of your budget.

It also has the benefit of FreeSync support.

It's also interesting for the console market, where many devs working with similarly AMD-specced consoles like Scorpio and Neo might decide to target 1080p, as most TVs are 1080p. If the RX 480 gets within 30% of the GTX 1080 at base specs at 1080p in some games, it's not unreasonable to suspect the same may be the case for console-optimized titles (against hypothetical GTX 1080 consoles).
 
The problems with the power draw of the reference card are just stupid. Why would you use a single 6-pin if you need 150W+? AMD is really the best at making their own products look worse than they are.
 

If I didn't mishear, it seems they're claiming they've passed internal and external PCI Express certification testing, and are basically claiming that this is not true, and that the cards tested might be defective or something.

We'll see in the days ahead.


One of the guys at Sapphire, again if I didn't mishear, said they've got an 8-pin on their card, but that he'd heard it didn't truly need an 8-pin.
 

Well, sending out a big batch of defective cards for reviews does not make things better, imho. I personally think they did the board design believing they would need less voltage to reach the desired clocks, and the chips turned out to need more. This would also explain why some AIBs are talking about very different chip quality.
 
People buy like 6 cards to mine, and the mobo has to supply like 80W to each GPU. Pulling near 500W through the PCIe slots means 500W from a 4-pin or 6-pin connector on the mobo. That's a lot of power through the mobo circuitry.
That is not how this works. This is not how any of this works.

First of all, there is no power going through any active or passive components on the motherboard. Those are just plain traces, so unless the board is significantly undersized and catches fire, there's almost nothing to worry about.

Second, current follows the path of least resistance. If the card draws the majority of its power from the motherboard, it's because the resistance is marginally lower on that path, and AMD isn't actively balancing the power draw.
With more cards added, even the slightest voltage drop on the motherboard side (be it only a few dozen millivolts) will invert the draw. Heck, in extreme situations such as an Ethereum mining rig, you probably won't even see more than 20-30W per card being drawn from the slot, with the rest being drawn from the 6-pin connector (which itself can easily take that power draw!).

What you *should* be worried about is spikes in power draw due to insufficient capacitance on the card, disrupting system stability. But I don't see that happening, given that the VRM setup of the 480 is pleasantly oversized.
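To illustrate the point about path resistance and voltage drop, here is a minimal two-path current-divider sketch; the card is modelled as a fixed current sink at 12 V, and all resistance and voltage values are invented for illustration:

```python
# Minimal two-path current-divider sketch: the card is modelled as a
# fixed ~150 W / 12 V current sink fed in parallel by the PCIe slot and
# the 6-pin. All resistance and voltage values are invented.

def split(v_slot, v_6pin, r_slot, r_6pin, i_load):
    """Return (slot_amps, sixpin_amps) for a shared 12 V input node."""
    # Node equation: (v_slot - vn)/r_slot + (v_6pin - vn)/r_6pin = i_load
    vn = (v_slot / r_slot + v_6pin / r_6pin - i_load) / (1 / r_slot + 1 / r_6pin)
    return (v_slot - vn) / r_slot, (v_6pin - vn) / r_6pin

i_load = 150 / 12.0          # ~12.5 A total

cases = {
    "equal voltages, slot path slightly lower resistance":
        split(12.00, 12.00, 0.020, 0.025, i_load),
    "a few dozen mV of droop on the motherboard side":
        split(11.95, 12.00, 0.020, 0.025, i_load),
}
for label, (slot_a, pin_a) in cases.items():
    print(f"{label}: slot ~{slot_a * 12:.0f} W, 6-pin ~{pin_a * 12:.0f} W")
```

With equal voltages the slot carries the bigger share; 50 mV of droop on the board side is enough to swing the split toward the 6-pin.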
 
I'm curious if it's specifically a console customer that guided some of these changes.
The prioritization and CU reservation in particular address concerns brought up in the context of the PS4's development.
Async shaders are still a mostly unused feature on PC; people are experimenting with them. Console developers have used async shaders for a long time already and have gained significant understanding of them. Async shader perf gains depend heavily on the developer's ability to pair the right shaders together (different bottlenecks) and on tuning the scheduling + CU/wave reservation parameters (to prevent starving and other issues). Manual tuning for each HW configuration is not going to be acceptable in PC development; the GPU should dynamically react to scheduling bottlenecks. GCN4 saves lots of developer work and likely results in slightly better performance than statically hand-tuned async shader code.
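A toy roofline-style model of why pairing shaders with different bottlenecks pays off; the throughput and workload numbers below are invented, and real async gains depend on the hardware and on the scheduling/reservation tuning mentioned above:

```python
# Toy roofline-style model: two passes, one ALU-bound and one
# bandwidth-bound. Numbers are invented for illustration only.
ALU_TFLOPS, BW_GBPS = 5.0, 256.0          # hypothetical GPU limits

# (TFLOPs needed, GB moved) per pass
alu_pass = (4.0, 32.0)                    # mostly math
bw_pass  = (0.5, 200.0)                   # mostly memory traffic

def pass_time(tflop, gbytes):
    # Each pass alone runs at whichever limit it hits first.
    return max(tflop / ALU_TFLOPS, gbytes / BW_GBPS)

serial = pass_time(*alu_pass) + pass_time(*bw_pass)

# Idealized async overlap: the two passes share the GPU, so each
# resource only has to cover the combined demand.
overlapped = max((alu_pass[0] + bw_pass[0]) / ALU_TFLOPS,
                 (alu_pass[1] + bw_pass[1]) / BW_GBPS)

print(f"serial: {serial:.2f} s, overlapped: {overlapped:.2f} s")
# ~1.58 s vs ~0.91 s: paired passes keep both ALU and bandwidth busy.
```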

Register pressure is another bottleneck of the GCN architecture. It's been discussed in many presentations since the current console gen launched. Fp16/int16 are great ways to reduce this bottleneck. GCN3 already introduced fp16/int16, but only for APUs. AMD marketing slides state that GCN4 adds fp16/int16 for discrete GPUs (http://images.anandtech.com/doci/10446/P3.png?_ga=1.18704828.484432542.1449038245). This means that fp16/int16 is now a standard feature on all GCN products. Nvidia only offers fp16 on mobile and professional products; gaming cards (GTX 1070/1080) don't support it.
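To make the register-pressure point concrete: two fp16 values occupy the same 32 bits as a single fp32 value, which is where the savings come from. A small illustration using Python's half-precision pack format (the values are arbitrary):

```python
import struct

# One fp32 value vs. two fp16 values: same 4 bytes of storage.
one_fp32 = struct.pack('<f', 3.14159)       # 4 bytes
two_fp16 = struct.pack('<2e', 3.14, 2.71)   # 2 + 2 bytes

print(len(one_fp32), len(two_fp16))         # 4 4
# This is the register-pressure argument from the post above: packed
# fp16 pairs let a shader keep twice as many values in the same
# vector register budget.
```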

Console devs have also lately started to discuss methods to sidestep GCN's weak geometry pipeline. My SIGGRAPH presentation and Graham's GDC presentation, for example, make some good points. Graham's GDC presentation is a must read: http://www.frostbite.com/2016/03/optimizing-the-graphics-pipeline-with-compute/ . On GCN2 it is actually profitable to software-emulate the GCN4 "primitive discard accelerator". This obviously costs a lot of shader cycles, but still results in a win. I am glad to see that GCN4 handles most of this busywork more efficiently with fixed-function hardware.
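As a rough sketch of the kind of test such a compute pre-pass performs (a generic back-face/zero-area triangle filter, not the code from either presentation):

```python
# Generic compute-style triangle filter: drop back-facing and
# zero-area triangles before they ever reach the geometry pipeline.
# This mirrors the idea of the linked presentations, not their code.

def keep_triangle(p0, p1, p2):
    """p* are (x, y) screen-space positions; counter-clockwise = front."""
    # Signed doubled area via the 2D cross product of the two edges.
    area2 = ((p1[0] - p0[0]) * (p2[1] - p0[1])
             - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return area2 > 0.0          # cull back-facing (<0) and degenerate (==0)

triangles = [
    ((0, 0), (10, 0), (0, 10)),   # front-facing: kept
    ((0, 0), (0, 10), (10, 0)),   # back-facing:  culled
    ((0, 0), (5, 5), (10, 10)),   # zero area:    culled
]
print([keep_triangle(*t) for t in triangles])   # [True, False, False]
```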
 
For what it's worth, when Tonga was released I was explicitly told that it supported FP16. So if that's not the case, that's a change in what AMD is saying.
 
Thanks for that explanation of how the power paths actually work; now we understand it better. It still looks very bad from a marketing perspective, though.

Sent from my HTC One using Tapatalk
 
AMD Radeon RX 480 4GB Video Cards At-Launch Have 8GB of Memory


Want in on a little secret? AMD and their board partners had some problems sourcing enough 8Gbps GDDR5 memory for the Radeon RX 480 launch today. That caused AMD to lower the minimum memory clock speed at the very last minute, so now the Radeon RX 480 will be using at least 7Gbps GDDR5 memory, and we have learned that ultimately it is up to the board partners to pick what they want to use. Since there were not enough parts to build the Radeon RX 480 4GB cards for the launch today, all the at-launch cards are shipping with 8GB of 8Gbps GDDR5 memory.

[Image: fine-print.jpg]


They didn’t tell any reviewers this unless they were directly asked about it, so most reviews today missed it. We learned about it when AMD informed us that they wouldn’t be sending out any Radeon RX 480 4GB cards and instead would be using a BIOS to limit the Radeon RX 480 8GB cards to 4GB of memory. In theory, if you can find a Radeon RX 480 4GB at-launch reference board, you should be able to flash it to a card with 8GB of memory!

We asked AMD if they would allow us to host the BIOS that lets the 4GB cards be flashed to 8GB cards, but they said absolutely not. That said, we are sure the files will make their way through back channels. Just a heads up: all the cards you can buy on store shelves right now have 8GB of memory on them, so you can save yourself $40 and still get 8Gbps memory!
http://www.legitreviews.com/amd-radeon-rx-480-4gb-video-cards-8gb-memory_183548
 
Actually, AMD explained it to me like this:
The basis is an actual die shot. It is enhanced with an image of the floor plan and then processed to highlight certain areas the team considers of interest.
The outer perimeter is probably the only die-shot part of it. And they probably used a weird mix of a floor plan and some kind of schematic block diagram (as nV usually does as well in these fake die shots) "to highlight certain things". A floor plan shows the actual layout of the stuff on the die, but I wouldn't believe that the SPs look like these small squares, or that a vector unit is a 4x4 assembly of such squares without any registers nearby. The layout of the SIMD units appears to be fairly different judging from a comparison with actual die shots of GCN GPUs.

Edit:
But if we take it semi-seriously, it appears to show that the L1 scalar data cache (L1-sD$) and the L1 instruction cache (L1-I$) are now shared by only 2 CUs and not (up to) 4 anymore. The official block diagram, however, shows groups of 3 CUs each.
 