AMD: Speculation, Rumors, and Discussion (Archive)

I'll put my lack of enthusiasm for multi-GPU aside:
The RX 480 in Crossfire can land anywhere from single-RX 480-level performance, to roughly a 1070, to between a 1070 and a 1080, and in some places a little better than a 1080.
The 1070, particularly once the non-FE or factory-OC versions come into play, is going to bring a consistent level of performance, much better power numbers, and somewhat sane pricing depending on the time frame for this product. A dual-GPU board doing much in terms of relative availability seems unlikely.

Where would this product leave Vega 10 unless that GPU isn't out for quite a while, even assuming a dual-GPU 490 launches pretty much immediately?
 
Well, the "compatibility mode" should be the default. In that mode they are very close to the specs, still out of spec but close, so AMD should drop it down just a bit and they would be set. Let the user do their thing with overclocking and disabling compatibility mode; that's done under their own volition, so it will be all good from AMD's end.

Also, David, I think if any of the reviewers tried other cards, any margin of error from the connecting device would be seen with them too, so I don't think it's fair to say the motherboard is what is causing the problem with the new drivers.
Good points. Also here is a post from a current AMD customer that I agree with:

Noisiv;5303099 said:
People's equipment IS sometimes that bad. That's the whole point of standards.
To have a system functioning properly even if some component(s) are overshooting the ideal specs by a few %.
Because of wear and tear, QA, sample variation, your cat's hairs, or whatever other reason.

Yet when AMD entirely misses those standards BY DESIGN, we have to walk on tiptoes
in order not to hurt the company which sells their products, or perhaps not to hurt someone's feelings.

What has happened here is AMD has been cornered by Nvidia's superior perf/W. So AMD is forced to push the power envelope, or sacrifice performance.
Or... tell standards **** all. And that's what they're doing.

AMD is cashing in on other PC components and their respective manufacturers who are in compliance with standards.
Hoping that most PCs are over-engineered. Such a ridiculous term to be used for a company that does not comply with basic PCI-E standards.
If only a few other manufacturers of PC components were as standard-compliant as AMD, you wouldn't be able to turn your PC on.
And if everyone was as compliant as AMD, we'd have entire households burning, at the very least. No, that's not a joke.

Finally, if you want your RX 480 barely within specs, compatibility mode has to be activated, and you will lose a few performance %.
This is not an out-of-the-box setting. Bear that in mind when looking at the benchmarks - all these RX 480s are out of PCI-E spec.
 
Good points. Also here is a post from a current AMD customer that I agree with:


Yeah, this was the problem from the 80's to early 90's when PCs first started coming to general consumers. There were NO standards (software was pretty bad too), and it made it a pain in the ass to put components together and have them work properly off the shelf unless you really knew what you were doing. Of course, there were fewer components to choose from and things weren't as power hungry, so it kind of evened out that way.
 
Strange, they picked it up now? It was there the same week Polaris 10 / RX 480 launched already, and Sapphire has it on their support site too (what's curious is the fact that Sapphire says it has 8GB GDDR5, which would suggest 2x4GB if it's dual Polaris 10)
The Sapphire listing was likely a typo. It's the same model number as the 8GB 480 and there was no 8GB 480 listed either.
 
Here's one way to look at the whole power issue. For marketing or OEM reasons they needed a card with a single 6 pin connector. That part was created but realistically they needed to back off the power a bit for that specific part. It's extremely unlikely any of the AIB cards will even have this supposed issue as they will have more connectors. I really don't see how this is any different than a mobile variant with lower speeds for hitting a thermal envelope. It's not like this is an issue across the entire line. And yes I realize most of the people pushing this are paid to do so.
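Just to put rough numbers on the "back off the power a bit" part (spec limits plus a ballpark launch-review figure, nothing official from AMD):

Code:
#include <cstdio>

// Back-of-the-envelope PCIe power budget for a single-6-pin card.
// Slot and connector limits are per the PCIe CEM spec; the measured draw is
// an assumed ballpark from launch reviews, not an official number.
int main()
{
    const double slot_limit_w   = 75.0;   // x16 slot, 12V + 3.3V combined
    const double six_pin_w      = 75.0;   // one 6-pin auxiliary connector
    const double budget_w       = slot_limit_w + six_pin_w;   // 150 W ceiling
    const double measured_est_w = 165.0;  // rough stock gaming draw reported at launch

    std::printf("budget %.0f W, estimated draw %.0f W, over by %.0f W\n",
                budget_w, measured_est_w, measured_est_w - budget_w);
    // A few percent off the clocks/voltage is enough to slide back under the
    // ceiling, which is roughly what compatibility mode does.
    return 0;
}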

As for that alleged 490, I still have to wonder if they're getting an interconnect working on the cards. Being able to transfer around 100GB/s would alleviate a lot of the traditional multi-adapter issues. It still may be cheaper than a single larger die as well because of yields. If cache efficiency was good, that single link would remove a lot of the complications with adapters being able to address each other. They also had a feature listing the ability to reserve CUs for a specific task. So in theory HWS/ACE could schedule dependencies to a single card, alternate per frame, etc. The current reference 480 has enough power handling for a dual card, and simply backing off the clocks 10% would provide a disproportionate power cut. That still leaves Vega to do something similar but with MCM and HBM at a Fury tier. Seeing a dual card in the $350-450 range seems reasonable, and Vega with the expensive HBM2 could still sit at the $500+ mark and compete with a 1080.
 
As for that alleged 490, I still have to wonder if they're getting an interconnect working on the cards. Being able to transfer around 100GB/s would alleviate a lot of the traditional multi-adapter issues.
If the artistic renditions of the die are at least somewhat close to reality, I am not sure where they'd stick a 100 GB/s interface (bidirectional?).

If cache efficiency was good, that single link would remove a lot of the complications with adapters being able to address each other.
That would require a change not yet mentioned for the L2. The caches are local to the memory channel they are tied to, and the current method for how they maintain coherence within the GPU does not carry over to another GPU, and their coherence with the CPU space usually involves some kind of bypass--meaning 0 efficiency.

So in theory HWS/ACE could schedule dependencies to a single card, alternate per frame, etc.
I am not sure they have the capacity or visibility to make that analysis, or the ability to communicate with their counterparts on the other GPU.
Another question mark I'd have is the non-coherent color compression pipeline, if a CU on another GPU tried to read compressed data. Intra-frame modification is a corruption threat within a GPU without the correct barriers, and juggling multiple compression caches that are by design incoherent with their own local resources may render those barriers inadequate or even more costly.
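To make the barrier point a bit more concrete, here is roughly what the explicit-API version of "hand a compressed surface to the other GPU" looks like today. This is a generic D3D12 cross-adapter sketch with made-up variable names, not anything AMD has shown for a dual-Polaris board; the point is that the copy into the shared resource is where the driver gets its chance to decompress.

Code:
#include <d3d12.h>

// Hypothetical helper (names are illustrative, not AMD's or any real engine's):
// hand GPU 0's finished render target to a cross-adapter staging resource so
// the second GPU can read it.
void StageFrameForOtherAdapter(ID3D12GraphicsCommandList* cmdListGpu0,
                               ID3D12Resource* renderTargetGpu0,
                               ID3D12Resource* crossAdapterCopy)
{
    // The render target may be DCC-compressed internally; transitioning it to
    // COPY_SOURCE and copying it out is where the driver can decompress.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = renderTargetGpu0;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_COPY_SOURCE;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdListGpu0->ResourceBarrier(1, &barrier);

    // crossAdapterCopy lives on a heap created with
    // D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER; cross-adapter textures have to be
    // row-major (D3D12_TEXTURE_LAYOUT_ROW_MAJOR), so no vendor-specific
    // compression survives this copy. The other GPU opens the resource via
    // OpenSharedHandle and waits on a shared fence before reading it.
    cmdListGpu0->CopyResource(crossAdapterCopy, renderTargetGpu0);
}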
 
Well a dual GPU would consume near 300W...
If they put a water cooler on it, cooler-running GPUs would draw less power, pushing the envelope down a bit.

One can always hope! Of course, a dual-GPU card would be very inefficient, especially with AMD's subpar driver support. If all games magically could support DX12 multi-adapter, then yeah, it would be cool, but most likely anything running in windowed mode can't even support Crossfire to begin with (and the game I play the most, WoW, I run windowed), so I think I for one am pretty screwed here... :p
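For what it's worth, "support DX12 multi-adapter" really is work on the game's side rather than the driver's. A generic sketch of just the first step (nothing AMD- or WoW-specific) looks like this: the application has to enumerate every adapter and create a device per GPU itself, then split work and synchronize between them on its own.

Code:
#include <dxgi1_4.h>
#include <d3d12.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// With the explicit multi-adapter API the app enumerates adapters itself and
// creates one ID3D12Device per GPU; nothing is paired up automatically.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter(IDXGIFactory4* factory)
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    // From here the game has to split work, share resources, and synchronize
    // fences across these devices itself; the driver does none of it for you.
    return devices;
}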

It's gonna have to be Vega or bust, I think, because a GF 1080 Ti or whatever they'll call it is going to cost *Dr. Evil-pinkie-touching-mouth* one million dollars!
 
I thought dual-GPU technology had improved a good deal? But I'm still not fond of it, so I haven't paid very close attention.

The way I would build this card is with GPUs binned for low leakage and a ~250W target. Clock speeds would have to go down a good deal, but power efficiency would go up, especially with a good cooler.
 
As little as a $20 premium over 2 8GB RX 480 for a complex, low-volume card? Doubtful.
Seen from the cost (or margin) side, maybe some higher markup would be justifiable. But presumably you don't really get anything at all over 2 separate RX 480s, so I don't know how high the markup could be if they intend to actually sell it. But maybe that's just me (I don't really see any point at all in dual-GPU cards in the first place)...
That said, I suppose AMD could make it more efficient, maybe hit a power target of 225W or so, by reducing the TDP and skipping some of the super inefficient highest P-states. Although that would not really help justify a higher cost either, because then it would supposedly just be more like 2x RX 470... Something like "fastest card" doesn't really have any value over 2 single cards if it operates exactly the same...
 
That would require a change not yet mentioned for the L2. The caches are local to the memory channel they are tied to, and the current method for how they maintain coherence within the GPU does not carry over to another GPU, and their coherence with the CPU space usually involves some kind of bypass--meaning 0 efficiency.
It might not need to. At a very basic level it could be as simple as addressing memory on the other card and removing transfer penalties with higher bandwidth. They also doubled cache sizes, which would work to alleviate some of those transfers. Textures, for example, wouldn't necessarily need to be duplicated for each device. Similar to how VR would use SFR to render a scene. High level scheduling could be duplicated for each device with the unnecessary half discarded. That primitive discard should be efficient at removing the half of the screen that wasn't necessary. ROPs and color compression should stay local, although shared resources may need a decompression pass. Another possibility would be to overlap the halves slightly. Texture decompression should work fine with that system. That should be simple enough for a driver to detect and perform. Would it be perfect? No. But it would effectively double the ROPs, VRAM, and compute that Polaris 10 would require for 4K gaming.
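As a toy illustration of that split (my own numbers and names, not anything AMD has described), each GPU could take its half of the screen plus a small overlap band so that reads near the seam stay local:

Code:
#include <cstdint>
#include <utility>

// Each GPU renders its half of the screen plus an overlap band, so resources
// touched near the seam stay local and nothing has to be fetched mid-frame
// from the other GPU's memory.
struct Viewport { uint32_t x, y, width, height; };

std::pair<Viewport, Viewport> SplitFrameWithOverlap(uint32_t screenW,
                                                    uint32_t screenH,
                                                    uint32_t overlapPx)
{
    const uint32_t half = screenW / 2;
    // GPU 0 renders the left half plus `overlapPx` columns past the seam.
    Viewport left  = { 0, 0, half + overlapPx, screenH };
    // GPU 1 renders the right half plus `overlapPx` columns before the seam.
    Viewport right = { half - overlapPx, 0, screenW - (half - overlapPx), screenH };
    return { left, right };
}

// e.g. SplitFrameWithOverlap(3840, 2160, 64) gives each GPU roughly half of a
// 4K frame; the duplicated 128-pixel band is the price of keeping reads local.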

If the artistic renditions of the die are at least somewhat close to reality, I am not sure where they'd stick a 100 GB/s interface (bidirectional?).
It might not even need to be 100GB/s. Going off some of the Zen documentation that has leaked, the capability should somehow be in the existing memory controller. Say 192bit memory bus with 64bit bidirectional? Maybe the current controller is slightly larger than 256bit? The architecture should support it for interfacing with Zen, we just don't know what it looks like. What I've seen all suggests CPU communication going through the GPU memory controller. Vice versa for GPU accessing system memory. Having 10 links, as opposed to 8 all for memory, would leave 2 free to connect to the other GPU. In the case of a 490 it could be hardwired on the board and still be far more effective than utilizing the PCIE bus.
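Quick sanity check on the bandwidth side, with all of these being assumptions rather than anything from AMD: a spare 64-bit channel clocked like the 480's GDDR5 would already be in the right ballpark, and miles ahead of the PCIe bus.

Code:
#include <cstdio>

// Hypothetical link bandwidth versus PCIe 3.0 x16 (assumed numbers).
int main()
{
    const double gddr5_gbps_per_pin = 8.0;    // RX 480 reference memory speed
    const double link_width_bits    = 64.0;   // one spare 64-bit channel
    const double link_gb_per_s      = gddr5_gbps_per_pin * link_width_bits / 8.0;

    // PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, 16 lanes.
    const double pcie3_x16_gb_per_s = 8.0 * (128.0 / 130.0) / 8.0 * 16.0;

    std::printf("hypothetical 64-bit link: %.0f GB/s per direction\n", link_gb_per_s);      // ~64
    std::printf("PCIe 3.0 x16:             %.1f GB/s per direction\n", pcie3_x16_gb_per_s); // ~15.8
    return 0;
}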

As little as a $20 premium over 2 8GB RX 480 for a complex, low-volume card? Doubtful.
What's so complex about it? Is a 1080 also low volume as that would seemingly be the competition for a part like that? Although a 1070 might be more reasonable considering the prices. The board already has the power circuitry to run it, so those costs won't be doubling either. Just make a slightly larger PCB and sell two GPUs and some additional memory chips.
 
If they put a water cooler on it, cooler-running GPUs would draw less power, pushing the envelope down a bit.

One can always hope! Of course, a dual-GPU card would be very inefficient, especially with AMD's subpar driver support. If all games magically could support DX12 multi-adapter, then yeah, it would be cool, but most likely anything running in windowed mode can't even support Crossfire to begin with (and the game I play the most, WoW, I run windowed), so I think I for one am pretty screwed here... :p

It's gonna have to be Vega or bust, I think, because a GF 1080 Ti or whatever they'll call it is going to cost *Dr. Evil-pinkie-touching-mouth* one million dollars!

Actually, the supposedly subpar "AMD drivers" (as perceived by consumers) are way ahead of Nvidia's these days... Seriously, for the past year Nvidia driver support has been a complete nightmare: "Game Ready" drivers that are worse than the previous non-Game-Ready ones, full of crashes and problems (fans, power limit, etc.) whether you own a 780 or a Maxwell card (I can't tell yet for the 10x0 series). And if only it were better on the professional side, lol. But no: while AMD FirePro has been on top for the last 7-8 months (capping out at 99% stability, never seen before), Nvidia has taken a dive... I seriously wonder if Nvidia has hired the old AMD driver teams...

If you play WoW on an Nvidia GPU, I hope you are not waiting for a fix to the vertical sync / frame limiter so your GPU doesn't heat up like mad rendering 2800 fps when you open your character sheet or any "system window"... And I don't remember SLI working so well in windowed mode, lol. In fact, I don't remember it working so well in WoW at all (which is pretty CPU limited in all cases; you would certainly get a better fps gain from a better CPU and a good amount of DDR).
 
The wccftech article is a joke. This howler,

Here is what was explicitly stated on one of the shipping manifests: the Baffin XT GPU (aka Polaris 10) was the C98 variant.

And he didn't mean Polaris 11 there, since in the very next line he says,

In fact both C94 and C98 boards represent iterations of the RX 480 (namely 4GB and 8GB).

Finally, he links the C99x board to dual Polaris 10, despite the fact that the second digit identifies the chip used.

Zauba shipping manifests for Hawaii were C676, C675 and C673 for XT, Pro and X2 respectively, and more recently:

C882 - Fiji nano
C888 - Fiji x2 (Gemini)
C880 - Fiji XT

So C98x and C99x are definitely not the same chip. C99x could be a Polaris 10 board if the C99x series is for Polaris 10 chips, but that isn't clear either. He could've at least linked the new prices on Zauba for the cards:

[screenshot: Zauba price listings for the new cards]
 