AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

It depends on a specific interaction between what pixels the GPU chooses to shade and a count of the number of pixels shaded so far.
If it weren't for that, the final outcome as expected by the API would be that the very last triangle is the only thing visible.
It's not clear why Vega's method, which could skip the trick they are using, should behave the same. Technically, the final result of only showing the last triangle is equivalent.

Twitchy graphical bugs due to timing issues not fully isolating interacting primitives are not unheard of, and such glitches usually get bug-fixed by developers.
 
I never said it is proof. I always wrote about what appears to be the case. And I explicitly hedged my bet and offered two possibilities:

So either that tool can't catch it properly (but it can in case of nV's GPUs) or AMD still needs to switch it on in a later driver.
;)

That rather sounds like a six of one, half a dozen of the other scenario.
 
The difference is in the rasterizer. Vega should look like the middle one from RealWorldTech. But if you look at the stream from PCPer, it looks exactly like Fury X. Look at this post:
https://forum.beyond3d.com/posts/1989684/

rasterizer2.png

http://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/
 
The difference is in the rasterizer. Vega should look like the middle one.
Even if Vega's draw stream rasterizer is fully on, it is not the same method as Maxwell/Pascal. Why should it look the same?

But if you look at the stream from PCPer, it looks exactly like Fury X. Look at this post:
The method is free to change modes based on its own metrics and driver settings. Even if it were on, it can in certain cases still choose to use a more traditional rasterization method--that is the "dynamic" part of the design. It is up to the driver and GPU. The API does not care.
 
I understand your concerns. But this is a really simple test, and I think the test is written so that if TBR is selectable, it chooses it. Also, there should be something in the code so that it does not use a fallback solution. You get my point?

And it's also a simple test with no difficulties for the TBR. So why does it choose the immediate-mode rasterizer, which looks exactly like Fiji? If even such a simple test switches to the immediate-mode renderer, I don't know what happens when software is more demanding.

Even if Vega's draw stream rasterizer is fully on, it is not the same method as Maxwell/Pascal. Why should it look the same?
And of course maybe it will look different in the future. But wouldn't it look more like Nvidia's solution than exactly like Fiji?!
 
Everyone is saying how awful Polaris is at clock speeds. Mine is on air, and has been sitting at 1435 for the year and change that I have had it, and I'm not overvolting it much more than 2%.

It's mining now, too, since I decided if everyone wanted one so bad, I'd find out why and benefit from it myself. To me, it is fantastic. Though I must admit it replaced two GTX460's in SLI ...

I've pushed it up to 1460, but that isn't sustainable for stability. RX 480. Nothing to write home about. I got it before the mining boom, and it satisfied gaming and workloads for me. I realize that may not be the case for everyone, but I rather still like it, and it does a damn sight better at math than had I caved and bought a GTX 1060 at the time.

"Where did I go wrong, *wrings hands*, *by buying this horrible GPU that suits my needs, Rhett?, I'll tell you now, for once Rhett, *I* was right to not give a damn!" Normal Southern accent used.
 
I understand your concerns. But this is a really simple test, and I think the test is written so that if TBR is selectable, it chooses it.
That is not what this test is doing. There is no "choose TBR" command. The test is exploiting a corner case between an unordered access view counter and a pixel kill based on that counter.

Using UAVs means having to be careful about getting unexpected behavior already. The API's guideline if things look weird is "it looks like you messed up, fix it yourself".
In this case, the test purposefully messes things up in a way that lets it catch Maxwell and Pascal before they hide their internal behavior.

Also, there should be something in the code so that it does not use a fallback solution. You get my point?
I'm not sure how the code could have a command to block a fallback that didn't exist. It doesn't have anything to keep Maxwell and Pascal from tiling, and I think it's very likely that at some low level in the driver/registry this option can be turned off.

And of course maybe it will look different in the future. But wouldn't it look more like Nvidia's solution than exactly like Fiji?!
It seems easier when designing a new fancy rasterizer to leave the older one in place. Why design a third not-fancy rasterizer to fall back to?
 
I don't get your point.

In my opinion it looks like TBR is off and the fallback is on, because the driver chose this. And if AMD switches TBR on in the future, I think it will look totally different from how it looks now. Right now it looks exactly like Fiji, and this indicates it's using the fallback immediate-mode renderer.

even if it were on, it can in certain cases still choose to use a more traditional rasterization method--that is the "dynamic" part of the design. It is up to the driver and GPU. The API does not care.
My point here is also: this test is so simple, so why does the GPU choose its immediate-mode renderer?
 
In my opinion it looks like TBR is off and the fallback is on, because the driver chose this. And if AMD switches TBR on in the future, I think it will look totally different from how it looks now.
That still does not mean it will look like Maxwell and Pascal. Potentially, it won't look like Fiji or the Nvidia GPUs.

My point here is also: this test is so simple, so why does the GPU choose its immediate-mode renderer?
The rule is that the binning rasterizer will remove pixels whose output does not make a difference.
This test specifically does something where each pixel is checking and setting a value that is accessed and modified by all the others.

I am speculating on this, but if the pixel's shader outputs can affect one another, it matters whether they are being culled by the rasterizer.
That might force the GPU to fall back.
The way the test is run might hit some other condition where the rasterizer will fall back.

The other scenario is that the rasterizer doesn't care. In that case, why wouldn't it gather up all the triangles, skip all but the last one, and spit out one color?
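The "gather up all the triangles and skip all but the last one" scenario can be sketched in a few lines. This is a toy model under my own assumptions, not AMD's actual design; `bin_and_shade` and the data layout are invented for illustration.

```python
# Toy model of a binning rasterizer's culling rule: batch the triangles that
# land in a tile, and for plain opaque writes with no side effects, only the
# last triangle covering a pixel actually needs to be shaded.
# Illustrative sketch only, not AMD's implementation.

def bin_and_shade(triangles, tile_pixels):
    """triangles: list of (tri_id, covered_pixels, color) in draw order."""
    last_writer = {}  # pixel -> (tri_id, color); later draws overwrite earlier
    for tri_id, covered, color in triangles:
        for px in covered:
            if px in tile_pixels:
                last_writer[px] = (tri_id, color)
    # Only the surviving writes are shaded and emitted.
    return {px: color for px, (_tri_id, color) in last_writer.items()}

# Ten fullscreen triangles over a 2x2 tile: only the last one is visible,
# so the binner shades 4 pixels instead of 40.
tile = {(x, y) for x in range(2) for y in range(2)}
tris = [(i, tile, i) for i in range(10)]
result = bin_and_shade(tris, tile)
```

The point of the sketch is that this shortcut is only legal when pixels cannot observe each other, which is exactly what the UAV counter in the test breaks.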
 
Thank you, 3dilettante, for your thoughts. But if the rasterizer falls back on such a simple task, do you mean special software (driver, game, BIOS?) is needed so that the rasterizer will not fall back?
 
Would be really nice if we could get confirmation that RX Vega will indeed be released during SIGGRAPH, even if it's not specifically targeted during the talks. After all, it's a professional conference.

I found it interesting during the PC Gamer video Nick from AMD always stated "more information" at SIGGRAPH. Never really stating it would be released then. Though I had thought Lisa Su had stated at Computex that SIGGRAPH would be the release for RX Vega.
 
Thank you, 3dilettante, for your thoughts. But if the rasterizer falls back on such a simple task, do you mean special software (driver, game, BIOS?) is needed so that the rasterizer will not fall back?
AMD has already said that the benefits of the rasterizer would vary based on what the game is doing.
The rasterizer is trying to remove pixels that it knows will not affect the output, and will batch them as best it can in the limited space on-die.
Triangles that overlap are common in many titles, with the amount varying based on what the software is doing to sort them or cull non-visible geometry.

This test is a rare case, as it's not recommended to do what it does. It's not recommended because the API makes no promises about what happens and device-specific behavior may occur.
If a game had this behavior, it would be considered a bug. A GPU would be focused on workloads that aren't considered buggy.
 
I found it interesting during the PC Gamer video Nick from AMD always stated "more information" at SIGGRAPH. Never really stating it would be released then. Though I had thought Lisa Su had stated at Computex that SIGGRAPH would be the release for RX Vega.
AMD has now updated their invite to the more generic "cutting-edge Vega architecture."
The marketing struggle is real.
 
silent_guy said:
The marketing struggle is real.
Eh, if they are not willing to commit to a hard release date in the near future, it is the right move. They can't just say nothing at this point, and if they provide a concrete date too far in the future the community would crucify them for that too. At least this way, they throw people a bone, while avoiding making any promises or statements that could boomerang on them. Pretty smart. Would have been even smarter to do it before they began selling actual cards to the public though....
 
I think they're just too far behind at this point. Maybe they need to generate good revenue off Zen to fund the r&d for a new architecture that can compete and add enough resources to the team to actually pull it off.

Vega seems too hot, power hungry, big (meaning expensive to build), uses more expensive memory and importantly slower than the competitor. There are no real positives here meaning the only dial they have is price which is basically eating your own shirt at this point.

I don't expect nvidia to let off the gas so Volta will just be a bigger kick in the nuts.
 
Thank you, 3dilettante, for your thoughts. But if the rasterizer falls back on such a simple task, do you mean special software (driver, game, BIOS?) is needed so that the rasterizer will not fall back?
Technically, the mentioned test does not show the tiled rasterizer in action even in NV's case. It shows a weird side effect of it. All triangles are still rasterized, pixel shaders executed, and pixels drawn. As mentioned, the test is effectively counting pixels (increasing an atomic value), and at some point, when a specified number of pixels is reached, it starts killing off the pixels. Since you can vary at which pixel count the pixel is killed, you can determine the order in which the rasterizer walks over the screen. And that order is "funky" in the case of Maxwell/Pascal.
The test case itself is nothing that some sort of tiling would speed up. If you're rendering, say, 100 triangles and writing something per pixel with an atomic value, that's already a reason to turn any pixel-killing optimizations off. What if the developer wanted to count the pixels those 100 triangles are covering and doesn't care about the overdraw (because he specifically doesn't use a Z-buffer)?
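The counting trick above can be reproduced in a toy simulation. This is a sketch under my own assumptions (made-up function names, an 8x8 screen, a fixed 4x4 tile size), not the actual test's code; it only shows why the surviving pixels reveal the traversal order.

```python
# Toy version of the counting trick: every shaded pixel bumps a shared
# counter, and pixels are "killed" (discarded) once the counter passes a
# threshold. The set of surviving pixels exposes the order in which the
# rasterizer walked the screen. Illustrative sketch only.

def run_test(pixel_order, kill_after):
    counter = 0
    survivors = []
    for px in pixel_order:
        counter += 1               # atomic increment in the real shader
        if counter <= kill_after:  # later pixels are killed
            survivors.append(px)
    return survivors

W, H = 8, 8
# Immediate-mode style walk: row by row across the whole screen.
scanline = [(x, y) for y in range(H) for x in range(W)]
# Tiled walk: finish each 4x4 tile before moving to the next one.
tiled = [(tx + x, ty + y) for ty in range(0, H, 4)
         for tx in range(0, W, 4) for y in range(4) for x in range(4)]

# Same threshold, different surviving footprint: the scanline walk fills
# the first two full rows, the tiled walk fills the first 4x4 tile.
a = run_test(scanline, 16)
b = run_test(tiled, 16)
```

Varying `kill_after` and plotting the survivors is essentially what the RealWorldTech tool does, which is why Maxwell/Pascal's tile pattern shows up while an immediate-mode walk looks like plain rows.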

To actually check for this, I think the only way would be to benchmark it with a very targeted test and look for "faster than possible" scenarios.
 
I may have been wrong about the area comparison.
Per PCPerspective, Vega is ~564 mm². GP102 is significantly smaller for the performance it gets. It doesn't need to overclock to compete on a perf/mm² basis.
What extra hardware does it have over Vega?

564mm²? Weren't there reports from the Sonoma RTG Summit of Vega being sub-500?

Found it:
https://www.golem.de/news/grafikchip-amd-zeigt-vega-10-und-erlaeutert-architektur-1701-125201.html
The relevant part is the update
According to AMD, slightly under 500 square millimeters
Since AMD's graphics chief Raja Koduri readily held up the Vega 10 chip for photos, we can make a rough statement about its size: the GPU, manufactured on the 14LPP process, should measure 500 to 550 mm² [Update: AMD's Raja Koduri told us Vega 10 is slightly smaller than 500 mm²].
(Translated from German.)

So since >64 mm² is not an amount an experienced website like PCPer would err by while measuring die size, and golem.de would not lie about being told directly by Mr. Koduri, maybe there are indeed different Vega versions. Maybe one for gamers and one for the professional market?
 
564mm²? Weren't there reports from the Sonoma RTG Summit of Vega being sub-500?

It's around 526 according to GamersNexus
  • 30mm x 30mm total size of interposer + GPU (does not include substrate)
  • 20.25mm x ~26mm GPU die size
  • 10mm x ~12mm HBM2 size (x2)
  • ~4mm package height (this is the one that’s least accurate, but gives a pretty good ballpark)
  • 64mm x 64mm mounting hole spacing (square, center-to-center)
  • PCB ~1mm
http://www.gamersnexus.net/news-pc/2972-amd-vega-frontier-edition-tear-down-die-size-and-more

900 mm² for the whole package.
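The ~526 figure follows directly from the listed dimensions; a quick check of the arithmetic (using only the numbers quoted above):

```python
# GamersNexus teardown numbers: 20.25 mm x ~26 mm die, 30 mm x 30 mm
# interposer + GPU. Die area comes out around 526 mm², roughly 38 mm²
# short of PCPer's ~564 mm² estimate.
die_w, die_h = 20.25, 26.0   # mm, per GamersNexus
area = die_w * die_h         # ~526.5 mm^2
interposer = 30.0 * 30.0     # 900 mm^2 for the whole interposer + GPU
```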
 