I wonder why so few reviewers even acknowledged (let alone study) the power saving mode of Vega 64.
Because who would buy a high-end GPU and then neuter performance?
> Performance/watt is IMO one of the best performance metrics, since it enables you to compare how much useful work each chip is doing, from an end-user perspective, per watt. You can then compare those to the theoretical numbers with ease, too.

It's a metric that wants to be useful, but unless you equalize performance or wattage it's meaningless. A large chip could have the same perf/watt as a small one, yet clocked a bit lower be way out in front in both performance and perf/watt. Cut Vega down even from power saving mode and it'd likely be ahead of Pascal.
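The equalization point can be made concrete with a toy calculation (all figures below are hypothetical, chosen only to illustrate the argument, not measured GPU numbers):

```python
def perf_per_watt(fps: float, watts: float) -> float:
    """Frames per second delivered per watt consumed."""
    return fps / watts

# Hypothetical chips: identical perf/watt, very different absolute performance.
small_chip = perf_per_watt(60.0, 120.0)   # 0.5 fps/W
large_chip = perf_per_watt(100.0, 200.0)  # 0.5 fps/W

print(small_chip == large_chip)  # True: same efficiency score...
# ...yet the large chip is ~66% faster. Without fixing either the
# performance target or the power budget, the metric alone cannot
# say which chip is "better".
```

This is why comparisons are usually done at either iso-performance (cap the frame rate, measure power) or iso-power (cap the power, measure frame rate).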
> Another interesting side effect of the LN2 is that Vega's power draw fell off a cliff. Like 100 W less than air-cooled at +50% power limit. So the GPU core evidently is very happy with LN2 and could probably go well in excess of 2 GHz with more voltage (I did all my testing at stock, which is 1.2 V).

Definitely some leakage issues. Don't suppose anyone has tested with a thermal limit as opposed to a power limit?
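One way to see why LN2 would slash power draw is that leakage current is strongly temperature-dependent. A crude rule-of-thumb sketch follows; the "doubles every ~35 °C" interval and the temperatures used are assumptions for illustration, not measured Vega figures:

```python
def leakage_scale(temp_c: float, ref_temp_c: float = 25.0,
                  doubling_c: float = 35.0) -> float:
    """Crude rule of thumb: leakage roughly doubles every ~35 degC.
    Returns the leakage multiplier relative to ref_temp_c.
    (The doubling interval is an assumption, not a process datasheet value.)"""
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

# Air-cooled hotspot vs. LN2-chilled die (temperatures are illustrative):
air = leakage_scale(85.0)    # ~3.3x the 25 degC leakage
ln2 = leakage_scale(-150.0)  # a tiny fraction of it
print(air / ln2)             # ~100x less leakage at LN2 temps under this model
```

Under a model like this, a chip pushed to a high voltage on a leaky sample would indeed see its power "fall off a cliff" once the die is chilled.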
https://cxzoid.blogspot.co.uk/2017/08/first-impressions-of-vega-fe-on-ln2.html?m=1
While it's not published (since that architecture section wasn't done in time), I did run that same benchmark. I'm getting much better numbers than that on Vega 64. About 20% better, to be precise.
Yep, that's been fixed. Same here on time constraints. Should I take it as a coincidence that Vega 64's effective texture bandwidth appears to have improved by roughly the same ~60 GB/s that the raw memory bandwidth did?
I would like to point out, though, that the AIDA GPU benchmarks shown below were part of (our, I guess?) Vega FE review and were not originally present in the RX Vega article. I've since added them, with the new and improved numbers for RX Vega - I cannot rebench the FE at the moment, though.
> Because who would buy a high-end GPU and then neuter performance.

At least the same ones who keep nagging about power consumption.
> The most relevant power numbers are at the default setting (balanced I guess) where most users will remain. Once you start saying "but it can do this and this if you do this and this" well guess what, you can do that stuff on other cards too.

There's an easily accessible power saving mode toggle in Nvidia drivers for Pascal cards?
> The most relevant power numbers are at the default setting (balanced I guess) where most users will remain. Once you start saying "but it can do this and this if you do this and this" well guess what, you can do that stuff on other cards too. Since this is an infinitely deep rabbit hole, reviewers have to stick with the out-of-box experience barring some extraordinary circumstances.

The issue is the low-power modes having little to no effect on performance. Balanced = Turbo minus 50 W. Power saving was costing 3 fps while taking consumption even lower. Take a title Vega is strong in, and 1080-level perf/watt at equivalent performance likely exists.
> The slide said it was heavily used by the resolve pass, but the context was FP16 optimizations in general,

Nope, they specifically use FP16 just for the checkerboarding; the base PS4 game doesn't use it due to lack of support. And the slide context was about 4K checkered as well.
> Many other areas to use FP16 not covered in that presentation that would be passing data to that shader. Not all of them would have been covered.

They were not outlined in that presentation. Even though it covered several other optimization techniques, they only mentioned FP16 within the context of their upscaling methodology. So the 30% more performance is for their upscaler only; the upscaling itself is only a small part of a full frame, so saving 30% of its rendering time might only translate to a tiny fraction of performance improvement in the full frame.
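The "small part of the frame" argument is just Amdahl's law. A quick sketch with hypothetical numbers (the 10% pass fraction is an assumption for illustration; only the 30% pass speedup comes from the discussion):

```python
def frame_speedup(pass_fraction: float, pass_speedup: float) -> float:
    """Amdahl's law: overall frame-time speedup when only one pass,
    taking pass_fraction of the frame, gets faster by pass_speedup."""
    return 1.0 / ((1.0 - pass_fraction) + pass_fraction / pass_speedup)

# Hypothetical: the resolve/upscale pass is 10% of the frame, and FP16
# makes that pass 30% faster (1.3x).
print(frame_speedup(0.10, 1.3))  # ~1.024 -> only about 2.4% overall
```

So even a healthy per-pass win shrinks to low single digits at the frame level unless FP16 is applied to many more passes.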
It is a shame, but I wouldn't get too attached to VP9. Only YouTube uses it and it's not scalable at all to the future; AV1 should be replacing it... (waits impatiently for the colossal slowness that is AV1...)
> There's an easily accessible power saving mode toggle in Nvidia drivers for Pascal cards?

Yep, under Manage 3D Settings: Power Management Mode (Optimal Power (default), Adaptive, or Prefer Max. Performance).
> It's clearly not a priority for them, which is a shame. I hope they have a better solution by the time they launch Raven Ridge.

I don't get why people care about 4K DXVA when madVR + LAV offers image quality far superior to any DXVA solution.
As an aside, this was the tipping point that made me pull the trigger today on buying a custom OC'd 1080 @ $499 over waiting and trying to land a Vega 56 reference card (miner demand putting the cheapest 1070 @ $429 took that out of consideration). There were lots of factors that ultimately led to that decision, but Vega having hardware acceleration for VP9 profile 2 @ 4K would have been enough for me to be willing to wait. But adding nothing over Polaris when Nvidia have added both VP9 up to 8K (1060) and further added VP9 profile 2 up to 8K (1050/1030) to products released since was just one fail too many.
> Yep, under Manage 3D Settings: Power Management Mode (Optimal Power (default), Adaptive, or Prefer Max. Performance).

You mean Nvidia runs their cards on power saving by default.
> Anyone know if Vega ISA supports any level of PlayReady 3? Perhaps @Ryan Smith ?

Sorry, no info on that. (But I'll see what I can find out.)
> Definitely some leakage issues, don't suppose anyone has tested with a thermal limit as opposed to power?

Happens when you run a chip at 1.2 V on a process that is supposed to be 0.80 V nominal and recommended up to 0.945 V overdrive.
Below is a quote from Overclockers UK's Gibbo, who reported that AMD's £449.99 price tag for the RX Vega 64 was "launch only" pricing. This price applied only to the retailer's initial allotment of RX Vega 64 Black standalone GPUs, with AMD giving gamers £100 off as a form of "early adopter" discount.
> You mean Nvidia runs their cards on power saving by default.

From Reddit:
And what are the performance and power differences between those modes?
Prefer max performance: Locks the GPU into a higher voltage and higher clock state, your GPU will stay at its '3D application/game' clocks in all situations and not lower itself into an idle state
Adaptive: The GPU will reduce clock speeds and voltages when it isn't under heavy load, i.e. when browsing the web/watching a video
Optimal Power: Basically everything adaptive does, however if your GPU is doing nothing (i.e. on the desktop) Optimal power won't keep re-rendering the frame over and over, if it's the same frame it'll just pull it from memory and display that frame.
> Happens when you run a chip at 1.2V on a process that is supposed to be 0.80V nominal and recommended up to 0.945V overdrive.

I'm surprised the 1.2 V is even within the long-term reliability limits of the process.
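For scale, first-order CMOS dynamic power goes as V²·f, so the voltage figures above alone imply a large power penalty. A back-of-the-envelope sketch, ignoring leakage and assuming the same clock at both voltages:

```python
def dynamic_power_ratio(v_new: float, v_old: float,
                        f_new: float = 1.0, f_old: float = 1.0) -> float:
    """First-order CMOS dynamic power scales ~ C * V^2 * f;
    return the ratio between two operating points (same capacitance C)."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Voltage figures from the discussion: 1.2 V stock vs. the process's
# 0.945 V recommended overdrive ceiling.
print(dynamic_power_ratio(1.2, 0.945))  # ~1.61 -> ~61% more dynamic power
```

And that is before the higher leakage that comes with the higher voltage and temperature, which is consistent with the leakage complaints earlier in the thread.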