AMD Lists The Radeon RX 490 Flagship – Polaris based Dual GPU Graphics Card For 4K Ready Gaming
.
http://wccftech.com/amd-rx-490-dual-gpu/
It should be cheaper than a 1080. Isn't a 1080 $700 right now? I bet the 490 comes in at around $500-$550.
Strange, they picked it up now? It was there the same week Polaris 10 / RX 480 launched already, and Sapphire has it on their support site too (what's curious is the fact that Sapphire says it has 8GB GDDR5, which would suggest 2x4GB if it's dual Polaris 10).
Good points. Also, here is a post from a current AMD customer that I agree with:

Well, the "compatibility mode" should be the default. They are very close to the specs, still out of spec but close, so AMD should drop that down just a bit and they would be set. Let the user do their own thing with overclocking and removing compatibility mode; it's under their own volition, so it will all be good from AMD's end.
Also, David, I think if any of the reviewers had tried other cards, and there was a margin of error from the connecting device, it would be seen with those cards too, so I don't think it's fair to say the motherboard is what is causing the problem with the new drivers.
Noisiv;5303099 said:
People's equipment IS sometimes that bad. That's the whole point of standards:
to have a system functioning properly even if some component(s) overshoot ideal specs by a few %.
Because of wear and tear, QA, sample variation, your cat's hairs, or whatever other reason.
Yet when AMD entirely misses those standards BY DESIGN, we have to walk on tiptoes
in order not to hurt the company selling these products, or perhaps not to hurt someone's feelings.
What has happened here is that AMD has been cornered by Nvidia's superior perf/W. So AMD is forced to push the power envelope, or sacrifice performance.
Or... tell standards **** all. And that's what they're doing.
AMD is cashing in on the other PC components, and their respective manufacturers, that are in compliance with the standards.
Hoping that most PCs are over-engineered. Such a ridiculous term to be used by a company that does not comply with basic PCI-E standards.
If only a few other manufacturers of PC components were as standard-compliant as AMD, you wouldn't be able to turn your PC on.
And if everyone were as compliant as AMD, we'd have entire households burning, at the very least. No, that's not a joke.
Finally, if you want your RX 480 barely within spec, compatibility mode has to be activated, and you will lose a few % of performance.
This is not an out-of-the-box setting. Bear that in mind when looking at the benchmarks: all of these RX 480 results are out of PCI-E spec.
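For anyone who wants to sanity-check the spec complaint above, the budget arithmetic is short. The slot limits below come from the PCI-E CEM specification; the 80 W slot draw is an illustrative stand-in for what individual launch reviews measured, not an official figure.

```python
# PCI-E CEM power budget for a x16 card, plus an illustrative overdraw example.
SLOT_12V_LIMIT_W = 5.5 * 12.0   # 66 W: max 5.5 A on the slot's 12 V pins
SLOT_3V3_LIMIT_W = 3.0 * 3.3    # ~9.9 W on the slot's 3.3 V pins
SLOT_TOTAL_LIMIT_W = 75.0       # overall slot budget
SIX_PIN_LIMIT_W = 75.0          # one 6-pin auxiliary connector

board_budget_w = SLOT_TOTAL_LIMIT_W + SIX_PIN_LIMIT_W  # 150 W in-spec total

measured_slot_12v_w = 80.0      # illustrative stand-in for reviewer measurements
overdraw_w = measured_slot_12v_w - SLOT_12V_LIMIT_W

print(f"12V slot limit: {SLOT_12V_LIMIT_W:.0f} W")
print(f"In-spec board budget: {board_budget_w:.0f} W")
print(f"Slot 12V overdraw in this example: {overdraw_w:.0f} W")
```

With those numbers, even a few watts over the 66 W 12 V rail limit is out of spec, regardless of whether total board power stays under 150 W.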
Where would this product leave Vega 10, unless that GPU isn't due out for quite a while, even assuming a dual-GPU 490 launches pretty much immediately?
Strange, they picked it up now? It was there the same week Polaris 10 / RX 480 launched already, and Sapphire has it on their support site too (what's curious is the fact that Sapphire says it has 8GB GDDR5, which would suggest 2x4GB if it's dual Polaris 10)

The Sapphire listing was likely a typo. It's the same model number as the 8GB 480, and there was no 8GB 480 listed either.
As for that alleged 490, I still have to wonder if they're getting an interconnect working on the cards. Being able to transfer around 100GB/s would alleviate a lot of the traditional multi-adapter issues.

If the artistic renditions of the die are at least somewhat close to reality, I am not sure where they'd stick a 100 GB/s interface (bidirectional?).
If cache efficiency was good, that single link would remove a lot of the complications with adapters being able to address each other.

That would require a change not yet mentioned for the L2. The caches are local to the memory channel they are tied to, the current method for how they maintain coherence within the GPU does not carry over to another GPU, and their coherence with the CPU space usually involves some kind of bypass, meaning zero efficiency.
So in theory HWS/ACE could schedule dependencies to a single card, alternate per frame, etc.

I am not sure they have the capacity or visibility to make that analysis, or the ability to communicate with their counterparts on the other GPU.
Well a dual GPU would consume near 300W...

Oh, don't worry. They could just have an external AC/DC adapter.
Well a dual GPU would consume near 300W...

If they put a water cooler on it, cooler-running GPUs would draw less power, pushing the envelope down a bit.
As little as a $20 premium over 2 8GB RX 480 for a complex, low-volume card? Doubtful.

Seen from the cost (or margin) side, maybe a somewhat higher markup would be justifiable. But you presumably don't really get anything at all over 2 separate RX 480s, so I don't know how high the markup could be if they intend to actually sell it. But maybe that's just me (I don't really see any point at all in dual-GPU cards in the first place)...
That would require a change not yet mentioned for the L2. The caches are local to the memory channel they are tied to, and the current method for how they maintain coherence within the GPU does not carry over to another GPU, and their coherence with the CPU space usually involves some kind of bypass--meaning 0 efficiency.

It might not need to. At a very basic level it could be as simple as addressing memory on the other card and removing transfer penalties with higher bandwidth. They also doubled cache sizes, which would alleviate some of those transfers. Textures, for example, wouldn't necessarily need to be duplicated for each device. Similar to how VR would use SFR to render a scene, high-level scheduling could be duplicated for each device with the unnecessary half discarded. That primitive discard should be efficient at removing the half of the screen that wasn't necessary. ROPs and color compression should stay local, although shared resources may need a decompression pass. Another possibility would be to overlap the halves slightly; texture decompression should work fine with that system. That should be simple enough for a driver to detect and perform. Would it be perfect? No. But it would effectively double the ROPs, VRAM, and compute that Polaris 10 would require for 4K gaming.
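The SFR-style split described above can be sketched in a few lines. This is purely illustrative Python, not a real graphics API: `split_frame` and `owns_primitive` are hypothetical helpers, and the 16-pixel overlap is an arbitrary example value for keeping filtering seamless at the seam.

```python
# Sketch of split-frame rendering across two GPUs: each GPU owns one half of
# the screen (optionally overlapping slightly) and discards primitives that
# fall entirely outside its half.

def split_frame(width, height, overlap=0):
    """Return the (x, y, w, h) region each of two GPUs would own."""
    half = width // 2
    left = (0, 0, half + overlap, height)                         # GPU 0
    right = (half - overlap, 0, width - half + overlap, height)   # GPU 1
    return left, right

def owns_primitive(rect, prim_min_x, prim_max_x):
    """True if a primitive's screen-space x-extent touches this GPU's region."""
    x, _, w, _ = rect
    return prim_max_x >= x and prim_min_x < x + w

left, right = split_frame(1920, 1080, overlap=16)
print(left, right)
# A primitive spanning x = 1000..1100 lands only on the right-hand GPU:
print(owns_primitive(left, 1000, 1100), owns_primitive(right, 1000, 1100))
```

Both GPUs would still receive the full command stream; the per-GPU discard is what keeps the duplicated scheduling cheap.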
If the artistic renditions of the die are at least somewhat close to reality, I am not sure where they'd stick a 100 GB/s interface (bidirectional?).

It might not even need to be 100GB/s. Going off some of the Zen documentation that has leaked, the capability should somehow be in the existing memory controller. Say a 192-bit memory bus with 64-bit bidirectional? Maybe the current controller is slightly larger than 256-bit? The architecture should support it for interfacing with Zen; we just don't know what it looks like. Everything I've seen suggests CPU communication going through the GPU memory controller, and vice versa for the GPU accessing system memory. Having 10 links, as opposed to 8 all for memory, would leave 2 free to connect to the other GPU. In the case of a 490 it could be hardwired on the board and still be far more effective than utilizing the PCIE bus.
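The back-of-the-envelope numbers behind that guess, assuming a hypothetical 64-bit link running at a GDDR5-class 8 Gbps per pin (the RX 480 8GB's memory data rate); none of this is from any AMD documentation:

```python
# Bandwidth of a hypothetical 64-bit inter-GPU link at GDDR5-class speeds,
# compared against PCIe 3.0 x16.
link_width_bits = 64
data_rate_gbps = 8                                   # effective Gbps per pin

one_way_gb_s = link_width_bits * data_rate_gbps / 8  # bits -> bytes
both_ways_gb_s = 2 * one_way_gb_s                    # if bidirectional

# PCIe 3.0 x16: 8 GT/s per lane, 16 lanes, 128b/130b encoding overhead.
pcie3_x16_gb_s = 16 * 8 / 8 * 128 / 130

print(f"one way: {one_way_gb_s:.0f} GB/s, bidirectional: {both_ways_gb_s:.0f} GB/s")
print(f"PCIe 3.0 x16: ~{pcie3_x16_gb_s:.1f} GB/s per direction")
```

So a narrow dedicated link lands in the right ballpark for the "~100 GB/s" figure while offering roughly 4-8x what the PCIe bus provides.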
As little as a $20 premium over 2 8GB RX 480 for a complex, low-volume card? Doubtful.

What's so complex about it? Is a 1080 also low-volume, as that would seemingly be the competition for a part like this? Although a 1070 might be more reasonable considering the prices. The board already has the power circuitry to run it, so those costs won't double either. Just make a slightly larger PCB and sell two GPUs and some additional memory chips.
One can always hope! Of course, a dual-GPU card would be very inefficient, especially with AMD's subpar driver support. If all games could magically support DX12 multi-adapter, then yeah, it would be cool, but most likely anything running in windowed mode can't even support CrossFire to begin with (and the game I play the most, WoW, I run windowed), so I think I for one am pretty screwed here...
It's gonna have to be Vega or bust, I think, because a GF 1080 Ti or whatever they'll call it is going to cost *Dr. Evil-pinkie-touching-mouth* one million dollars!
Here is what was explicitly stated on one of the shipping manifests: the Baffin XT GPU (aka Polaris 10) was the C98 variant.
In fact, both C94 and C98 boards represent iterations of the RX 480 (namely the 4GB and 8GB versions).