Quote: "Low stock and high demand. Ethereum miners probably bought a crapload and they will probably burn out their motherboards mining."

According to a few sources the supply is still notably higher than the GTX 1080's was, for example, and I don't see why people are suddenly so worried about motherboards. There are, for example, GTX 950s without a power connector that draw pretty much exactly 75 W while gaming at stock, and they have no issues OC'ing and thus pulling more than 75 W from the PCIe slot.
Quote: "...I don't see why people are suddenly so worried about motherboards..."

People buy something like six cards to mine, and the mobo has to supply something like 80 W to each GPU. Pulling nearly 500 W through the PCIe slots means nearly 500 W through a 4-pin or 6-pin connector on the mobo. That's a lot of power through the mobo circuitry.
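For a rough sense of scale, here is a back-of-the-envelope sketch of those numbers in Python; the 80 W per card, the six-card rig, and the ~75 W slot budget are the assumptions being debated above, not measurements:

```python
# Back-of-the-envelope check of the mining-rig numbers above (the 80 W per
# card and the six-card rig are assumptions from the post, not measurements).
cards = 6
slot_draw_per_card_w = 80   # claimed per-card draw through the PCIe slot
pcie_slot_budget_w = 75     # commonly cited x16 slot power budget

total_slot_draw_w = cards * slot_draw_per_card_w
print(f"total through the board's slot power feed: ~{total_slot_draw_w} W")
print(f"overdraw vs. the slot budget: {slot_draw_per_card_w - pcie_slot_budget_w} W per card")
```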
Quote: "Yeah, I didn't realize the transistor count between Polaris 10 and GP104 was as close as it is. I know potato, potahto, but the GTX 1080 achieving the performance it does with +26% transistors is impressive..."

The GTX 1080 has much higher clocks. But it's a good question how Nvidia managed to push their clocks that high without a big increase in power usage. 1200 MHz is already a bit too high for AMD. They said themselves that the 470 will show the true power efficiency of Polaris and will run at lower clocks.
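As a quick sanity check on that "+26%" figure, using the commonly cited transistor counts for the two chips (the counts are figures I'm bringing in, not something stated in this thread):

```python
# Commonly cited transistor counts (assumed): GP104 ~7.2 billion, Polaris 10 ~5.7 billion.
gp104, polaris10 = 7.2e9, 5.7e9
print(f"GP104 vs Polaris 10: +{(gp104 / polaris10 - 1) * 100:.0f}% transistors")  # ~ +26%
```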
The comparison is interesting, but it won't be a real-world scenario. No one will play at 1080p on a GTX 1080, and no one would (not saying they can't) play above 1080p on an RX 480.
The key to the success of the custom 480s will be the price. A 4 GB 480 at 1400 MHz for 240 dollars would be a beast; for 300 it would be a waste.
Quote: "So at the risk of asking a dumb question... is this supposed to be a die shot?"

No. It's not real and clearly shopped.
The problems with the power draw of the reference card are just stupid. Why would you use a single 6-pin if you need 150 W+? AMD really is the best at making their own products look worse than they are.
Quote: "So at the risk of asking a dumb question... is this supposed to be a die shot?"

No, it's supposed to be an artistic rendering of the die.
If I didn't mishear, it seems they're claiming they've passed internal and external PCI Express certification testing, and are basically saying the power-draw reports aren't true and that the cards tested might be defective or something.
We'll see in the days ahead.
One of the guys at Sapphire, again if I didn't mishear, said they've got an 8-pin on their card, but that he'd heard it didn't truly need an 8-pin.
Quote: "People buy something like six cards to mine, and the mobo has to supply something like 80 W to each GPU. [...] That's a lot of power through the mobo circuitry."

That is not how this works. This is not how any of this works.
Quote: "I'm curious if it's specifically a console customer that guided some of these changes. The prioritization and CU reservation in particular address concerns brought up in the context of the PS4's development."

Async shaders are still a mostly unused feature on PC; people are experimenting with them. Console developers have used async shaders for a long time already and have gained significant understanding of them. Async shader performance gains depend heavily on the developer's ability to pair the right shaders together (ones with different bottlenecks) and on tuning the scheduling + CU/wave reservation parameters (to prevent starvation and other issues). Manual tuning for each hardware configuration is not going to be acceptable in PC development; the GPU should react dynamically to scheduling bottlenecks. GCN4 saves developers a lot of work and likely results in slightly better performance than statically hand-tuned async shader code.
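To make the "pair shaders with different bottlenecks" point concrete, here is a toy model in Python; the job names and utilization numbers are invented for illustration, and this is a sketch of the scheduling intuition, not AMD's actual hardware scheduler:

```python
# Toy model of "pair shaders with different bottlenecks" (illustration only).
# Each job is described by how many time units of work it puts on the ALUs and
# on memory bandwidth. Run alone, a job takes as long as its most contended
# resource; run concurrently, each resource has to chew through the combined
# work, so the total is bounded by the busier resource.
jobs = {
    "shadow_pass":  {"alu": 0.25, "bw": 0.90},   # bandwidth-bound (made-up numbers)
    "lighting":     {"alu": 0.85, "bw": 0.30},   # ALU-bound
    "post_process": {"alu": 0.50, "bw": 0.55},   # fairly balanced
}

def serial_time(a, b):
    # One after the other: each job is limited by its own worst resource.
    return max(jobs[a].values()) + max(jobs[b].values())

def async_time(a, b):
    # Overlapped: each resource processes the combined work of both jobs.
    alu = jobs[a]["alu"] + jobs[b]["alu"]
    bw = jobs[a]["bw"] + jobs[b]["bw"]
    return max(alu, bw)

for pair in [("shadow_pass", "lighting"), ("lighting", "post_process")]:
    print(pair, "serial:", round(serial_time(*pair), 2),
          "async:", round(async_time(*pair), 2))
```

The mismatched pair overlaps well, while the two ALU-leaning jobs barely gain anything, which is why the pairing (or having the GPU react to it dynamically, as described above) matters.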
Quote: "So much so that I think AMD might have hired Nvidia's die shopper to make theirs too."

Actually, AMD explained it to me like this: the basis is an actual die shot. It is enhanced with an image of the floor plan and then processed to highlight certain areas the team considers of interest.
Quote: "Register pressure is another bottleneck of the GCN architecture. It's been discussed in many presentations since the current console generation launched. FP16/INT16 are great ways to reduce this bottleneck. GCN3 already introduced FP16/INT16, but only for APUs. AMD marketing slides state that GCN4 adds FP16/INT16 for discrete GPUs (http://images.anandtech.com/doci/10446/P3.png?_ga=1.18704828.484432542.1449038245). This means that FP16/INT16 is now a standard feature on all GCN products. Nvidia only offers FP16 on mobile and professional products; gaming cards (GTX 1070/1080) don't support it."

For what it's worth, when Tonga was released I was explicitly told that it supported FP16. So if that's not the case, that's a change in what AMD is saying.
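As a loose, host-side illustration of why FP16 eases register and storage pressure (this is NumPy on the CPU, not GCN shader code, and the array size is arbitrary): a half-precision value occupies 2 bytes instead of 4, so the same register or cache budget holds twice as many values, at the cost of precision and range.

```python
import numpy as np

# Rough illustration: converting fp32 data to fp16 halves the footprint,
# at the cost of some rounding error.
values = np.linspace(0.0, 1.0, 1024, dtype=np.float32)
as_fp16 = values.astype(np.float16)

print("fp32 footprint:", values.nbytes, "bytes")    # 4096
print("fp16 footprint:", as_fp16.nbytes, "bytes")   # 2048
print("max abs rounding error:",
      float(np.max(np.abs(values - as_fp16.astype(np.float32)))))
```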
Quote: "That is not how this works. This is not how any of this works.

First of all, there is no power going through any active or passive components on the motherboard. Those are just plain traces, so unless the board is significantly undersized and catches fire, there's almost nothing to worry about.

Second, current follows the path of least resistance. If the card draws the majority of its power from the motherboard, it's because the resistance is marginally lower on that path, and because AMD isn't actively balancing the power draw.

With more cards added, and even the slightest voltage drop on the motherboard - be it only a few dozen millivolts - the draw will invert. Heck, in extreme situations such as an Ethereum mining rig, you probably won't even see more than 20-30 W per card being drawn from the slot, with the rest drawn from the 6-pin connector (which itself can easily take that power draw!).

What you *should* be worried about are spikes in power draw due to insufficient capacitor capacity on the card, disrupting system stability. But I don't see that happening, given that the VRM setup of the 480 is pleasantly oversized."

Thanks for your comment. Now we can understand better how this works. Still, it is very bad from a marketing perspective.
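To put numbers on the "slightest voltage drop inverts the draw" point quoted above, here is a small two-path sketch in Python; the path resistances, the 13 A total draw, and the 50 mV droop are all made-up values chosen only to show the effect:

```python
# Toy two-path model of the current split (all values are invented for
# illustration): the card's 12 V input is fed from the PCIe slot and from the
# 6-pin, each through a small path resistance, into one input node.
def split(v_slot, v_6pin, r_slot, r_6pin, i_total):
    # Solve for the node voltage vn from:
    #   (v_slot - vn)/r_slot + (v_6pin - vn)/r_6pin = i_total
    vn = (v_slot / r_slot + v_6pin / r_6pin - i_total) / (1 / r_slot + 1 / r_6pin)
    return (v_slot - vn) / r_slot, (v_6pin - vn) / r_6pin

i_total = 13.0                   # ~156 W card at 12 V (assumed)
r_slot, r_6pin = 0.009, 0.010    # assumed path resistances in ohms

for v_slot in (12.00, 11.95):    # second case: 50 mV of droop on the slot side
    i_slot, i_6pin = split(v_slot, 12.00, r_slot, r_6pin, i_total)
    print(f"slot at {v_slot:.2f} V -> slot {i_slot:.1f} A, 6-pin {i_6pin:.1f} A")
```

With equal source voltages, the marginally lower slot-side resistance carries the majority of the current; a 50 mV droop on the slot side shifts it to carrying only about a third of the total, with the rest moving to the 6-pin.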
http://www.legitreviews.com/amd-radeon-rx-480-4gb-video-cards-8gb-memory_183548

Quote: "Want in on a little secret? AMD and their board partners had some problems sourcing enough 8Gbps GDDR5 memory for the Radeon RX 480 launch today. That caused AMD to lower the clock speeds at the very last minute, so the Radeon RX 480 will now be using at least 7Gbps GDDR5 memory, and we have learned that ultimately it is up to the board partners to pick what they want to use. Since there were not enough parts to build Radeon RX 480 4GB cards for the launch today, all the at-launch cards are shipping with 8GB of 8Gbps GDDR5 memory.

They didn't tell any reviewers this unless they were directly asked about it, so most reviews today missed it. We learned about it when AMD informed us that they wouldn't be sending out any Radeon RX 480 4GB cards and would instead be using a BIOS to limit the Radeon RX 480 8GB cards to 4GB of memory. In theory, if you can find a Radeon RX 480 4GB at-launch reference board, you should be able to flash it to a card with 8GB of memory!

We asked AMD if they would allow us to host the BIOS that lets the 4GB cards be flashed to 8GB cards, but they said absolutely not. That said, we are sure the files will make their way through back channels. Just a heads up: all the cards you can buy on store shelves right now have 8GB of memory on them, so you can save yourself $40 and get 8Gbps memory!"
Quote: "Actually, AMD explained it to me like this: The basis is an actual die shot. It is enhanced with an image of the floor plan and then processed to highlight certain areas the team considers of interest."

The outer perimeter is probably the only die-shot part of it. They probably used a weird mix of a floor plan and some kind of schematic block diagram (as Nvidia usually does as well in these fake die shots) "to highlight certain things". A floor plan shows the actual layout of the structures on the die, but I don't believe the SPs look like these small squares, or that a vector unit looks like a 4x4 assembly of such squares with no registers nearby. The layout of the SIMD units appears to be fairly different, judging from a comparison with actual die shots of GCN GPUs.