When will Third Party Vega Boards be available?

When do you think Third Party Vega boards will be available?

  • Before New Years 2017 — Votes: 0 (0.0%)
  • Before end of Second Quarter 2018 — Votes: 0 (0.0%)
  • Total voters: 25
So the chip does have issues with power consumption for one reason or another, be it silicon process-related, architecture, bugs/errata or whatever.
Performance issues more than power issues, I'd say. Most/all of the missing features are stated as giving higher FPS at similar power draw. Use one of the recent Vulkan titles as a baseline and the numbers aren't all that bad.

It's simply bigger.
Much, much bigger.
Like 1.5k ALUs more.
Bigger should use less power at comparable performance levels. More ALUs at lower clocks use less power while performing similar work. Power curves become rather advantageous doing that.
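
A rough back-of-the-envelope sketch of that argument as a Python toy model; the ALU counts, clocks and voltages below are made-up illustration numbers rather than real GPU figures, and dynamic power is approximated as N_alus * f * V^2:

Code:
# Toy comparison of "narrow + fast" vs "wide + slow" at roughly equal throughput.
# Dynamic power is approximated as N_alus * f * V^2; all figures are invented
# purely to show the shape of the trade-off, not actual GPU specs.
def dynamic_power(n_alus, freq_mhz, volts):
    return n_alus * freq_mhz * volts ** 2 / 1e6   # arbitrary units

def throughput(n_alus, freq_mhz):
    return n_alus * freq_mhz                      # idealised work per unit time

narrow_fast = dict(n_alus=2560, freq_mhz=1700, volts=1.15)  # fewer ALUs, high clock
wide_slow   = dict(n_alus=4096, freq_mhz=1062, volts=0.95)  # ~1.5k more ALUs, low clock

for name, cfg in (("narrow + fast", narrow_fast), ("wide + slow", wide_slow)):
    print(name,
          "| throughput:", throughput(cfg["n_alus"], cfg["freq_mhz"]),
          "| relative power:", round(dynamic_power(**cfg), 2))
# With these made-up numbers the wide, slow design does the same work for
# roughly a third less power, which is the "advantageous power curve" point.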
 
Certain Swedish dealer websites now claim 3rd party Vegas will arrive in stock on/around December 13th. No idea if this is placeholder information, but according to a mate of mine who worked many years at a PC dealer, it should be a fairly accurate estimated delivery date given by the suppliers.

I'll believe it when I see it, but if this is true I'll be a happy camper... :p
 
Certain Swedish dealer websites now claim 3rd party Vegas will arrive in stock on/around December 13th. No idea if this is placeholder information, but according to a mate of mine who worked many years at a PC dealer, it should be a fairly accurate estimated delivery date given by the suppliers.

I'll believe it when I see it, but if this is true I'll be a happy camper... :p
German retailers are also at the "something something shipping soon™" stage.
 
Certain Swedish dealer websites now claim 3rd party Vegas will arrive in stock on/around December 13th. No idea if this is placeholder information, but according to a mate of mine who worked many years at a PC dealer, it should be a fairly accurate estimated delivery date given by the suppliers.

I'll believe it when I see it, but if this is true I'll be a happy camper... :p
It also depends on whether they launch with, say, 30 cards and more next year, or are able to really launch in December; fingers crossed for the latter, but one does need tempered expectations with a Vega launch.

One of the biggest headaches I see for AIB partners is the requirement of a very specialist thermal-dynamics/fluid modelling-simulation tool to calculate thermal dissipation and the complexity of the 8-Hi stacks, or in this situation the different packages that can be shipped as noted by a few publications, along with implementing different cooling solutions for these custom models; the impact is greatest at the logic/lowest DRAM die and its thermal spread, along with considering the GPU spec (clocks/voltages/power/fan profile-quiet mode/etc.) as implemented.
I doubt the AIB partners have this level of speciality, and it is far from cheap; both the software and the experienced engineers to use it in this field are expensive, so they will need to rely somewhat on AMD.

Edit:
An example would be COMSOL with its Multiphysics platform and heat transfer module, but at silicon-engineering detail it requires specific in-depth data from whoever provided the HBM and from whoever did the packaging with all the dies: one more reason I see this coming from AMD and SK Hynix/Samsung, along with the engineering expertise at this level of detail.
https://www.comsol.com/heat-transfer-module
 
It also depends on whether they launch with, say, 30 cards and more next year, or are able to really launch in December; fingers crossed for the latter, but one does need tempered expectations with a Vega launch.
Well, to be fair this isn't the Vega launch launch. It's just the 3rd party Vega launch; it's been months, so there should reasonably be product available in the channel already.

One of the biggest headaches I see for AIB partners is the requirement of a very specialist thermal-dynamics/fluid modelling-simulation tool to calculate thermal dissipation and the complexity of the 8-Hi stacks, or in this situation the different packages that can be shipped as noted by a few publications, along with implementing different cooling solutions for these custom models; the impact is greatest at the logic/lowest DRAM die and its thermal spread, along with considering the GPU spec (clocks/voltages/power/fan profile-quiet mode/etc.) as implemented.
A: that's an impressive run-on sentence, and...
B: I haven't the faintest idea WTH you're talking about! :)

How about manufacturers slapping a heatsink on the assembly just like they've always done with every other GPU since the Riva TNT at least? I don't see what would be so different here. They're not the developers of the hardware; they're buying a finished product with known specs which they'll adapt to. There's nothing to simulate here. Thermal output is a known factor by the time it comes to build cards.
 
How about manufacturers slapping a heatsink on the assembly just like they've always done with every other GPU since the Riva TNT at least? I don't see what would be so different here. They're not the developers of the hardware; they're buying a finished product with known specs which they'll adapt to. There's nothing to simulate here. Thermal output is a known factor by the time it comes to build cards.
He's suggesting AIBs need a thermal model because the HBM heights vary. I'd agree with the "slap a heatsink on it" approach. Even if the thermal model changes, how much does the cooling solution vary anyway? Etching the contact plate to accommodate the taller RAM would be sufficient. Heat pipe setups aren't all that complex in design. It'd be the equivalent of adding a second loop to a single water block.
 
Well, to be fair this isn't the Vega launch launch. It's just the 3rd party Vega launch; it's been months, so there should reasonably be product available in the channel already.


A: that's an impressive run-on sentence, and...
B: I haven't the faintest idea WTH you're talking about! :)

How about manufacturers slapping a heatsink on the assembly just like they've always done with every other GPU since the Riva TNT at least? I don't see what would be so different here. They're not the developers of the hardware; they're buying a finished product with known specs which they'll adapt to. There's nothing to simulate here. Thermal output is a known factor by the time it comes to build cards.

Yeah, I tend to agree, which is why I am not that negative but hopeful.
The next part is that thermal dissipation at an engineering level usually involves very expensive modelling/simulation software such as COMSOL; there are others, but I used this as an example as it is pretty well known by quite a lot of engineers.
I take it you clicked on the COMSOL link I provided?
In a way it is a bit like using SPICE to simulate ICs before production, but focused instead on the physics aspects such as thermal dynamics/fatigue/etc.
Now, I bet not all AIB partners use SPICE either, although it could still have benefits, but it sure would be used earlier in the chain.
https://en.wikipedia.org/wiki/SPICE

There is more involved than just a high-level "slap on a cooler" when it comes to a large die package with HBM2 and different package heights; one needs to understand exactly the thermal properties/fatigue of the components and their characteristics based upon a given design and cooling solution, and importantly the spec it is meant to work within.
A spec sheet may give you the behaviour of a capacitor/MOSFET/etc., but you need something like COMSOL to see and understand how this influences your design, and it becomes more complex with the HBM-GPU package, power delivery/VRMs, etc. sitting reasonably close to each other.
Like I said, I am sure AMD would use such tools, along with SK Hynix/Samsung, but this level of expertise is probably missing at the AIB partners, who will have a more simplified process.
The thermal strain on HBM in this context is at the logic die and bottom DRAM die (especially 8-Hi with the 16GB product, and yeah, I know that is not this Vega product, but just saying); this heat also spreads out across the board and may be exacerbated a little by packages that have different heights. You would need something like COMSOL to actually see the influence this has and which materials/thickness/gap, combined with the cooling solution, are the most ideal.
Or one takes much more time and goes with more trial and error (one reason I mentioned it, albeit briefly, in my post in the context of delays); hence the headache, because no company wants to end up releasing a product that fails down the line and costs money in RMAs due to custom models not being the same as the reference in design and, importantly, in the performance envelope used.
Or release a custom model that performs/behaves no better than the reference.

Edit:
Pretty sure that if EVGA had used a solution like COMSOL they would not have omitted the thermal pads on their initial GTX 1080 FTW :)
However, one cannot assume doing something similar will work for the HBM-GPU package, as the heat challenge is close to the interposer/substrate - just to emphasise, I do not see it as an issue if dealt with properly, and I am pretty sure it would be.
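
To make the "simplified process" vs. full modelling distinction concrete: the first-order estimate most AIBs can do by hand is a lumped thermal resistance chain like the sketch below (every power and resistance value is assumed for illustration only); what it cannot show is the gradients inside the package - logic die vs. top of the HBM stack - which is exactly where a COMSOL-style model earns its money.

Code:
# Much-simplified steady-state thermal chain: T_hotspot = T_ambient + P * sum(R).
# Every resistance and power value here is an assumption for illustration; a full
# FEM model (COMSOL etc.) resolves gradients inside the package instead of
# lumping everything into a handful of resistances like this does.
ambient_c = 30.0
gpu_power_w = 220.0

# junction -> TIM -> cold plate/heat pipes -> fins to air, in K/W (assumed)
resistances_k_per_w = {
    "die to TIM": 0.02,
    "TIM": 0.03,
    "cold plate + heat pipes": 0.05,
    "fins to air (given fan curve)": 0.12,
}

hotspot_c = ambient_c + gpu_power_w * sum(resistances_k_per_w.values())
print(f"estimated hotspot: {hotspot_c:.0f} C")   # ~78 C with these assumptions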
 
He's suggesting AIBs need a thermal model because the HBM heights vary.
ASUS isn't even doing that; they're using the exact same cooler as on their GeForce 1080 Ti model, with a completely flat hotplate. Maybe the springs holding the cooler down to the card are different to account for the presence of the interposer, no idea.

As AMD apparently can ship you chips from several sources with slightly varying HBM height, if you were to depend on modifying the hotplate to accommodate this you'd need to special-order coolers according to whatever batches of chips come in, greatly delaying and complicating manufacturing compared to just using one single design across the board.

As HBM doesn't dissipate that much heat, the benefit of exactly matching the height might be very minimal anyway. Apparently it's on the order of a few hundredths of a millimeter of difference.
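
For a sense of scale on that, a minimal 1-D conduction estimate; the stack power, footprint and TIM conductivity below are assumptions for illustration, not measured Vega figures:

Code:
# delta_T across the TIM = q * t / (k * A): what a few extra hundredths of a
# millimeter of TIM under one HBM stack would cost in temperature.
stack_power_w = 8.0                 # assumed heat flowing through one HBM stack
footprint_m2 = 0.012 * 0.008        # ~12 mm x 8 mm stack footprint (assumed)
tim_k_w_per_mk = 5.0                # decent paste/pad, W/(m*K) (assumed)

for extra_gap_mm in (0.02, 0.05, 0.10):
    delta_t = stack_power_w * (extra_gap_mm / 1000.0) / (tim_k_w_per_mk * footprint_m2)
    print(f"extra gap {extra_gap_mm:.2f} mm -> ~{delta_t:.1f} K added across the TIM")
# With these assumptions the penalty stays well under 2 K, which is why a single
# flat hotplate plus a compliant TIM can plausibly cover the height variation.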
 
Proshop, a Danish internet mail-order company, offers the ASUS Strix Vega for pre-purchase. It costs SKR 8000, which is roughly €800. /facepalm

That's the price of a 1080Ti... Ugh. I sure hope it is a preliminary price.
 
Proshop, a Danish internet mail-order company, offers the ASUS Strix Vega for pre-purchase. It costs SKR 8000, which is roughly €800. /facepalm

That's the price of a 1080Ti... Ugh. I sure hope it is a preliminary price.

Yeah, and it costs €799.90 in the euro zone. Those Adrenalin drivers had better be pretty magical to justify that price.
 
Why the hell does the shitty (like, actually shitty) STRIX cost more than the THICC (and decent) Red Devil?
Yeah, and it costs €799.90 in the euro zone. Those Adrenalin drivers had better be pretty magical to justify that price.
Gonna be silly either way.
 
ASUS isn't even doing that; they're using the exact same cooler as on their GeForce 1080 Ti model, with a completely flat hotplate. Maybe the springs holding the cooler down to the card are different to account for the presence of the interposer, no idea.

As AMD apparently can ship you chips from several sources with slightly varying HBM height, if you were to depend on modifying the hotplate to accommodate this you'd need to special-order coolers according to whatever batches of chips come in, greatly delaying and complicating manufacturing compared to just using one single design across the board.

As HBM doesn't dissipate that much heat, the benefit of exactly matching the height might be very minimal anyway. Apparently it's on the order of a few hundredths of a millimeter of difference.

HBM2 and VRMs dissipate a heck of a lot of heat.
You cannot go by the temperature at the top of the stack, nor the temperature measured at the backplate, for HBM2; you need to look at it at the logic die and the DRAM die at the bottom. Now raise the HBM2 clock (if that is something the custom AIB model is designed for) and you really need to model the thermal characteristics, or spend quite a lot of time on trial and error with a simplified process.
Or provide a quieter fan, and again you need to take into account the behaviour/properties of the cooling solution used.

Anyway, the HBM2 memory and the VRM/doubler run hotter than the GPU die itself, even when all are OC'd or running with raised power.

It will be interesting to see what one gets with the Asus Vega custom model in terms of performance/behaviour over the AMD reference design if they are using the exact same cooler as on the 1080 Ti, depending of course on what other modifications they make underneath.
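
On the measurement-point part above, here is a toy 1-D illustration of why it matters where you measure; it assumes all the heat leaves through the top of the stack into the cold plate, and the per-die power and layer resistances are invented numbers, not real HBM2 data:

Code:
# Why "HBM temperature" depends on where you measure: heat generated lower in
# the stack has to cross every layer above it on the way to the cold plate, so
# the logic die at the bottom ends up hottest. All values are assumptions.
cold_plate_c = 55.0
layer_r_k_per_w = 0.6                       # assumed resistance per layer, K/W
die_power_w = [0.8, 0.8, 0.8, 0.8, 3.0]     # top DRAM dies first, logic die last

temps_c = []
t = cold_plate_c
heat_crossing_w = sum(die_power_w)          # the top interface carries everything
for p in die_power_w:
    t += heat_crossing_w * layer_r_k_per_w  # drop across the interface above this die
    temps_c.append(t)
    heat_crossing_w -= p                    # the next interface down only carries heat from the dies below

print(f"top DRAM die ~{temps_c[0]:.1f} C, logic die at the bottom ~{temps_c[-1]:.1f} C")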
 
I wonder what exactly went wrong with HBM, given its promise of great power-saving potential. It cannot be as simple as top-level engineers not seeing that the heat sources move a lot closer together compared to GPU+GDDR distributed over a much larger area.
 
HBM2 and VRMs dissipate a heck of a lot of heat.
You cannot go by the temperature at the top of the stack, nor the temperature measured at the backplate, for HBM2; you need to look at it at the logic die and the DRAM die at the bottom. Now raise the HBM2 clock (if that is something the custom AIB model is designed for) and you really need to model the thermal characteristics, or spend quite a lot of time on trial and error with a simplified process.
Or provide a quieter fan, and again you need to take into account the behaviour/properties of the cooling solution used.

Anyway, the HBM2 memory and the VRM/doubler run hotter than the GPU die itself, even when all are OC'd or running with raised power.

It will be interesting to see what one gets with the Asus Vega custom model in terms of performance/behaviour over the AMD reference design if they are using the exact same cooler as on the 1080 Ti, depending of course on what other modifications they make underneath.

I wouldn't put the HBM dies in the same boat as the VRMs & GPU. Not sure why you do so. The HBM dies' high temps indicate a cooling issue rather than excessive power draw from that specific segment.

The HBM die just happens to be in the vicinity of a GPU die which draws more power than was anticipated.
 
I wonder what exactly went wrong with HBM, given its promise of great power-saving potential. It cannot be as simple as top-level engineers not seeing that the heat sources move a lot closer together compared to GPU+GDDR distributed over a much larger area.
Power saving comes mostly from controllers.
 
Proshop, a Danish internet mail-order company, offers the ASUS Strix Vega for pre-purchase. It costs SKR 8000, which is roughly €800. /facepalm

That's the price of a 1080Ti... Ugh. I sure hope it is a preliminary price.

That's the price of a 1080Ti plus waterblock here in the States. Crazy.
 
ASUS isn't even doing that; they're using the exact same cooler as on their GeForce 1080 Ti model, with a completely flat hotplate. Maybe the springs holding the cooler down to the card are different to account for the presence of the interposer, no idea.

As AMD apparently can ship you chips from several sources with slightly varying HBM height, if you were to depend on modifying the hotplate to accommodate this you'd need to special-order coolers according to whatever batches of chips come in, greatly delaying and complicating manufacturing compared to just using one single design across the board.

As HBM doesn't dissipate that much heat, the benefit of exactly matching the height might be very minimal anyway. Apparently it's on the order of a few hundredths of a millimeter of difference.

I'm imagining very thick, spongy TIMs to account for the variable interface heights.
 
I wouldn't put the HBM dies in the same boat as the VRMs & GPU. Not sure why you do so. The HBM dies' high temps indicate a cooling issue rather than excessive power draw from that specific segment.

The HBM die just happens to be in the vicinity of a GPU die which draws more power than was anticipated.
You can say that about any component, whether VRMs or HBM or the GPU die, when cooled properly; and there are constraints involved in any consumer product, while less so in the more expensive prosumer/professional/HPC segments.
I put it in the same boat because the temp of HBM2 at the logic core/lowest DRAM die will be higher than the GPU's in probably most implementations.
Doubler > VRM (generally) > HBM2 (at the base) > GPU die.

Actual measurements, and I think additional modelling/simulation, were done by one of the processor companies with HBM2, looking specifically at the context I mention, and the results sort of match what has been seen by those reviewing the AMD Vega GPUs.
It comes down to where you measure the temp for HBM2, and yeah, like you say, implementation, which tbh is what I have been saying all along; but it does have thermal dissipation considerations beyond normal due to the 2.5D/3D design and packaging. Anyway, the thermal dissipation behaviour is not the same as that of a GPU die or standard GDDR5 memory, but this is digressing from my original point and context.
And like I said, it is not an issue if done correctly OR if used within the reference spec/characteristics, but it is more difficult without access to one of the engineering simulation/modelling packages such as COMSOL and a senior engineer in that field; the context, as I mentioned, is going away from reference and building one's own custom model with its own performance envelope, among other factors.
 