AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Since the IF nodes effectively behave the same as a PCIe switch, it's pretty safe to assume that both Vega 20 chips are likewise connected to just a single shared PCIe 3.0 16x port.
Why is there a second physical PCIe connection on the Duo's PCB then?
Are you suggesting it is exclusively for power? If so, I think Apple wouldn't call it a PCIe connection.

These connectors on top of the GPU also render your assumption of "20 lanes per GPU" somewhat moot. That's pretty obviously intended for a ring topology of 4 Vega 20 chips spread over 2 PCBs, each with PCIe 3.0 16x at the socket.
GPU-to-GPU connectivity doesn't preclude CPU-GPU connectivity.
 
Why is there a second physical PCIe connection on the Duo's PCB then?
Are you suggesting it is exclusively for power? If so, I think Apple wouldn't call it a PCIe connection.
Actually, yes. I expect it is purely for power; after all, that's 400W / ~10A more than a regular PCIe slot could safely handle. There is no other power connector on that PCB, and strictly splitting power delivery from data is a sensible choice. My guess is the wide one is the 12V rail, while GND is split between the original PCIe connector and the narrow one.

EDIT: Nope, looks like you are right. There are individual lanes on the second connector too. That doesn't add up in terms of PCIe lanes, though. Even with the 64-lane CPU, using all of the lanes just for the GPUs (in the 4x GPU configuration) would result in a major bottleneck for storage.
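To make the lane budget behind that storage concern explicit (a trivial sketch; the 64-lane figure is from the post, while the full x16-per-GPU allocation is my assumption, not a confirmed spec):

```python
# Lane-budget arithmetic for the 4x GPU configuration discussed above.
CPU_LANES = 64                  # assumed 64-lane CPU root complex
GPUS = 4                        # 4x GPU configuration
LANES_PER_GPU = 16              # assumed full x16 link per GPU
left_for_storage = CPU_LANES - GPUS * LANES_PER_GPU
print(left_for_storage)         # nothing left over for NVMe / other IO
```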

EDIT2:
In terms of advertised specs, that system appears to be designed for a throughput of <=180 fps @ 8K (the frame rate wasn't specified, so it may actually just be 3x24 fps, or worst case 3x60 fps). That's just a single PCIe 3.0 16x port worth of bandwidth, uncompressed / losslessly compressed. That means I actually doubt there are more than 16 lanes per MPX module, simply because there is hardly any use case for streaming more than that off-GPU in uncompressed form. And even when repurposing the 3 additional GPUs solely as decoder cards next to a dedicated rendering card, there is still sufficient throughput, even if you were to route all traffic via the CPU instead of the IF link.
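As a rough sanity check of that bandwidth claim: given the worst-case frame rate, how many bytes per pixel can a single PCIe 3.0 x16 link sustain? The 8K resolution (7680x4320), the usable link rate (~15.75 GB/s), and the 180 fps figure are all assumptions taken from the reasoning above:

```python
# Bytes-per-pixel budget for uncompressed 8K video over PCIe 3.0 x16.
PCIE3_X16_BYTES_PER_SEC = 15.75e9   # ~985 MB/s per lane x 16 lanes
WIDTH, HEIGHT = 7680, 4320          # 8K UHD resolution
FPS = 180                           # worst case: 3 streams x 60 fps

pixels_per_second = WIDTH * HEIGHT * FPS
bytes_per_pixel_budget = PCIE3_X16_BYTES_PER_SEC / pixels_per_second
print(f"{bytes_per_pixel_budget:.2f} bytes/pixel budget")
```

Around 2.6 bytes per pixel is in the range of common uncompressed 10-bit formats, which is consistent with the argument that a single x16 link covers the advertised throughput.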

In terms of probable routing on the mainboard, the slots blocked by the MPX modules are probably 8x electrical, giving a full 8x8 setup if no MPX modules are used. That also speaks for the MPX modules each using only a dedicated 16 lanes.
 
In the end, I think this is a bit of a lost opportunity for Apple not to adopt Rome, as it would be a much better option for expandability than the rather old Skylake-SP. I just don't know whether AMD would have been ready to provide the CPUs on time, nor how much use Apple makes of AVX-512.

I certainly could see a Zen-based Mac Pro in the future, but do you think that would've been feasible for this initial generation?

I don't think there was enough time to integrate and validate such a new product as Rome into a brand-new device like the new Mac Pro.

Remember, we also didn't get Cascade Lake-AP in there, even though it would've been a wonderful fit in a single-socket system with 300W of CPU cooling potential. I'd speculate that its omission was due to the same time-to-market limitations that kept Rome from being considered.

I feel like we don't always have enough respect for the amount of design and validation effort required for these kinds of products.
 
Actually, yes. I expect it is purely for power; after all, that's 400W / ~10A more than a regular PCIe slot could safely handle. There is no other power connector on that PCB, and strictly splitting power delivery from data is a sensible choice. My guess is the wide one is the 12V rail, while GND is split between the original PCIe connector and the narrow one.

EDIT: Nope, looks like you are right. There are individual lanes on the second connector too. That doesn't add up in terms of PCIe lanes, though. Even with the 64-lane CPU, using all of the lanes just for the GPUs (in the 4x GPU configuration) would result in a major bottleneck for storage.

EDIT2:
In terms of advertised specs, that system appears to be designed for a throughput of <=180 fps @ 8K (the frame rate wasn't specified, so it may actually just be 3x24 fps, or worst case 3x60 fps). That's just a single PCIe 3.0 16x port worth of bandwidth, uncompressed / losslessly compressed. That means I actually doubt there are more than 16 lanes per MPX module, simply because there is hardly any use case for streaming more than that off-GPU in uncompressed form. And even when repurposing the 3 additional GPUs solely as decoder cards next to a dedicated rendering card, there is still sufficient throughput, even if you were to route all traffic via the CPU instead of the IF link.

In terms of probable routing on the mainboard, the slots blocked by the MPX modules are probably 8x electrical, giving a full 8x8 setup if no MPX modules are used. That also speaks for the MPX modules each using only a dedicated 16 lanes.


What does the PCIe 4.0 bus do for reduced latency in multi-GPU setups? And Win10 will be supporting mGPU in greater depth in the future.
 
Stepping aside from the Vega Duo discussion momentarily: would the Rapid Packed Math, Draw Stream Binning Rasteriser, and/or Primitive Shader features in Vega possibly see greater use as developers turn towards Navi? The failure of, for example, primitive shaders came as much from the failed promise of driver automation as from anything else, so far as I know, and Navi is just as dependent as Vega on games and programs being compiled for the feature, from what I gathered from Computex. Has there been any indication that the architectural differences in these areas are too great to assume some base level of compatibility between Vega and RDNA?
 
From what I read, PS is abandoned on Vega because of performance issues, while it will work automatically on Navi. Wait and see, of course... but we'll never have it on Vega, IMO; they've moved on.
 
I recall AMD's position being that they support it insofar as you code for it. Which no one has as of yet, to my knowledge. Wolfenstein did PS, but it was done through shaders instead of any bespoke hardware. Though I wasn't aware of performance issues so much as technical issues making it hard to get working properly. Be that as it may, I was hoping the architectures were compatible enough to see some benefit on Vega. It's wishful thinking for sure, but if the investment cost is marginal, you never know.
 
I recall AMD's position being that they support it insofar as you code for it. Which no one has as of yet, to my knowledge. Wolfenstein did PS, but it was done through shaders instead of any bespoke hardware. Though I wasn't aware of performance issues so much as technical issues making it hard to get working properly. Be that as it may, I was hoping the architectures were compatible enough to see some benefit on Vega. It's wishful thinking for sure, but if the investment cost is marginal, you never know.

Because in the end, they never exposed PS.
 
While I know it's leaning toward wishful thinking, my initial thought was that, given the architectural similarities between Vega and Navi, and Navi requiring special consideration to make use of PS as well, it might be easy enough to expose it on Vega while they're at it. I don't have insight into how much the architectures differ on this point, however.
 
While I know it's leaning toward wishful thinking, my initial thought was that, given the architectural similarities between Vega and Navi, and Navi requiring special consideration to make use of PS as well, it might be easy enough to expose it on Vega while they're at it. I don't have insight into how much the architectures differ on this point, however.

Did nobody find it weird that the Navi presentation didn't mention any of Vega's most important features, like the DSBR, Primitive Shaders, HBCC, the NGG fast path...? Looks like they hit a dead end.
 
Did nobody find it weird that the Navi presentation didn't mention any of Vega's most important features, like the DSBR, Primitive Shaders, HBCC, the NGG fast path...? Looks like they hit a dead end.
The whole presentation seemed pretty superficial; we'll get more when the reviews are out, I'm sure.
 
Stepping aside from the Vega Duo discussion momentarily: would the Rapid Packed Math, Draw Stream Binning Rasteriser, and/or Primitive Shader features in Vega possibly see greater use as developers turn towards Navi? The failure of, for example, primitive shaders came as much from the failed promise of driver automation as from anything else, so far as I know, and Navi is just as dependent as Vega on games and programs being compiled for the feature, from what I gathered from Computex. Has there been any indication that the architectural differences in these areas are too great to assume some base level of compatibility between Vega and RDNA?
Rapid Packed Math is supported in Navi, and in general most GPUs from multiple vendors now support some kind of packed FP16 instructions.
The DSBR is theoretically enabled already, but by its design it's not meant to be "used" by developers. It's just meant to be active to improve efficiency automatically and transparently. The gains from it appear to be limited, though there may have been lost synergy with the missing primitive shaders.
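As an aside, the packed-FP16 idea is easy to illustrate on the host side: two half-precision values share one 32-bit register, and a single "packed" operation updates both lanes, which is what an instruction like GCN/RDNA's v_pk_add_f16 does in one cycle. This numpy sketch only mimics the bit-level behavior; it is an illustration, not GPU code:

```python
import numpy as np

def pack2_f16(a, b):
    # Pack two fp16 values into one 32-bit word (low half = a, high = b),
    # the way Rapid Packed Math stores an fp16 pair in a single 32-bit VGPR.
    halves = np.array([a, b], dtype=np.float16).view(np.uint16)
    return np.uint32(halves[0]) | (np.uint32(halves[1]) << np.uint32(16))

def pk_add_f16(x, y):
    # Lane-wise fp16 add on two packed words, mimicking v_pk_add_f16.
    xs = np.array([x & 0xFFFF, x >> 16], dtype=np.uint16).view(np.float16)
    ys = np.array([y & 0xFFFF, y >> 16], dtype=np.uint16).view(np.float16)
    s = (xs + ys).view(np.uint16)
    return np.uint32(s[0]) | (np.uint32(s[1]) << np.uint32(16))

x = pack2_f16(1.5, 2.0)
y = pack2_f16(0.5, 3.0)
r = pk_add_f16(x, y)
lanes = np.array([r & 0xFFFF, r >> 16], dtype=np.uint16).view(np.float16)
print(lanes)  # both lane sums, computed by one packed operation
```

The throughput win comes from doing both lane additions with one instruction on one register, which is why packed FP16 can double the FLOP rate over FP32.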

I recall AMD's position being that they support it insofar as you code for it. Which no one has as of yet, to my knowledge. Wolfenstein did PS, but it was done through shaders instead of any bespoke hardware. Though I wasn't aware of performance issues so much as technical issues making it hard to get working properly. Be that as it may, I was hoping the architectures were compatible enough to see some benefit on Vega. It's wishful thinking for sure, but if the investment cost is marginal, you never know.

https://www.mail-archive.com/amd-gfx@lists.freedesktop.org/msg24458.html

Going by a request for information from last year, AMD's position is that primitive shaders will not come to GFX9.

Did nobody find it weird that the Navi presentation didn't mention any of Vega's most important features, like the DSBR, Primitive Shaders, HBCC, the NGG fast path...? Looks like they hit a dead end.
Primitive shaders were mentioned in the slides, and since they were part of the NGG concept it seems plausible that Navi has carried something forward for that. The DSBR hasn't been mentioned, but it's also something that would have benefited from primitive shaders and so might have a case for being in Navi because of them.
HBCC seemed like it had little importance to most games, although whether it's gone, relegated to a more professional product, or just considered old news isn't clear.
 
The whole presentation seemed pretty superficial; we'll get more when the reviews are out, I'm sure.

Sure, so was the first Vega presentation, and yet they didn't forget to mention all the new stuff.

Well, they talked about PS, no? Like why it was not in effect for Vega but OK for Navi.

Where? I watched the whole video of the presentation and looked at all the slides, and there is no mention of such stuff. The only one who actually mentioned it was AnandTech, and it was unofficial, so it might be their own invention, just like the Radeon VII's 128 ROPs...
 
Primitive Shaders were mentioned in the slides from Mike Mantor's and Andy Pomianowski's breakout session ("4 Prim Shaders out, 8 Prim Shaders in"; I originally wrote it the other way around). They also mentioned the DSBR verbally, in the Fellow panel rather than the breakout session. More at launch, I guess.
 
They just described the new geometry:
Primitive Shaders were mentioned in the slides from Mike Mantor's and Andy Pomianowski's breakout session ("4 Prim shaders in, 8 Prim Shaders out"), they also mentioned DSBR (verbally). More at launch, i guess.

"4 primitives out, 8 primitives in" is a strange number. Not sure if it shouldn't be the other way around. Anyway, it's nowhere near the 17 primitives per clock that Vega's PS promised...

https://www.techpowerup.com/gpu-specs/docs/amd-vega-architecture.pdf
 