AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

And this brought ATI back into the market, or in the 9700's case, I should say, made a powerful entry into the market... I don't think Carsten was thinking of "performance" and a superior architecture (as the 9700 was), but just that it will bring sales back to AMD and let them recapture some of their market share. I don't want to speak for him, of course, so this just reflects the way I understood his post.
Almost right - the first part is spot on, but I am not thinking Vega is a generally superior architecture, rather one that will be able to introduce new concepts and approaches that developers might want to explore. Which was also the case with R300. There are some things in Volta, too, that apparently appeal to developers, and not only in the professional space.

I find it somewhat funny that the only group who really cares about perf/watt is the miners, because it determines their profits, and they mostly use AMD GPUs...
They do first and foremost care about the ROI time, because mining is a very volatile market - and there, the once-cheap RX 470s were unbeatable. Plus, they require less manual tuning to get decent results out of for a variety of crypto algorithms, especially Ethereum. And hand-tuning can be a prohibitive factor when you're running dozens or hundreds of cards.
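Since ROI time is the deciding metric, here is a minimal back-of-the-envelope sketch in C of the comparison a miner might run. Every number in it (card prices, hashrates, power draws, payout rate, electricity rate) is an illustrative assumption, not real market data:

```c
#include <stdio.h>

/* Back-of-the-envelope ROI estimate for a mining card.
   All figures below are illustrative placeholders, not market data. */
typedef struct {
    const char *name;
    double price_usd;    /* purchase price */
    double hashrate_mhs; /* Ethereum hashrate, MH/s */
    double power_w;      /* wall power draw, watts */
} Card;

/* Days until the card pays for itself, given revenue per MH/s per day
   and electricity cost per kWh. Returns -1 if it never breaks even. */
static double roi_days(Card c, double usd_per_mhs_day, double usd_per_kwh) {
    double revenue = c.hashrate_mhs * usd_per_mhs_day;           /* per day */
    double power_cost = c.power_w / 1000.0 * 24.0 * usd_per_kwh; /* per day */
    double profit = revenue - power_cost;
    return profit > 0.0 ? c.price_usd / profit : -1.0;
}

int main(void) {
    /* hypothetical figures in the spirit of the thread */
    Card rx470 = { "RX 470 (once-cheap)", 180.0, 24.0, 120.0 };
    Card vega  = { "Vega FE (leak)",     1000.0, 32.5, 300.0 };
    double usd_per_mhs_day = 0.10, usd_per_kwh = 0.10;

    printf("%-20s %6.0f days to ROI\n", rx470.name,
           roi_days(rx470, usd_per_mhs_day, usd_per_kwh));
    printf("%-20s %6.0f days to ROI\n", vega.name,
           roi_days(vega, usd_per_mhs_day, usd_per_kwh));
    return 0;
}
```

With these made-up numbers, the cheap RX 470 pays for itself several times faster than the Vega FE would, which is why perf/watt alone doesn't capture the miner's calculus.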
 
I find it somewhat funny that the only group who really cares about perf/watt is the miners, because it determines their profits, and they mostly use AMD GPUs...
I think there are others that care about perf/W.
At least there's me, at least from a theoretical perspective, and I don't mine.
Data centers, businesses, HPC, laptops, casual overclockers that might want to know how easily they can add a few notches via overdrive, underclockers/undervolters that want a quieter PC, board partners that want to know how much work it takes to put out an OC edition, etc.
 
Can't wait for some independent FE benchmark reviews. It's too bad the AMD internal benchmarks left a lot to be desired with regard to who the true competition will be.
 
I think there are others that care about perf/W.
At least there's me, at least from a theoretical perspective, and I don't mine.
Data centers, businesses, HPC, laptops, casual overclockers that might want to know how easily they can add a few notches via overdrive, underclockers/undervolters that want a quieter PC, board partners that want to know how much work it takes to put out an OC edition, etc.

Professional-wise, things are quite different. The GPUs and CPUs you see there are used very differently, as is how we sell them to you (underclocked, limited on TDP) for the sake of efficiency. Laptops and so on are built around the battery, but in reality TDP has not moved in ten years: same battery capacity, same TDP, more performance for the same power (and you don't really get to choose).

Basically, better performance at the same TDP was the baseline (for smartphones etc. as well). But then a laptop has the same battery and better performance... still the same battery, the same electrical power requirement, and officially a slightly better battery life (one hour more, officially)...
 
So we may be getting AMD's equivalent of a 1080 in July/August of 2017? Prices aside, the time gap for AMD to catch Nvidia on performance seems to be growing. A GTX 1170 may kill this in performance within six months.
 
We do not have visibility into what the compressors' compressed data actually is, so we do not know how often they compress data or how well they do. Even if a compressor can reach a higher average ratio in the no-spill case, if it's doing something like spilling halfway through, then there is a constraint on its effectiveness that is not pattern-related.

C'mon, you armchair experts [this is with a loving, friendly face, actually]. Simply try to 'forget' a barrier for your color targets and you have the raw color-compressed bits in front of you, ready for your reverse-engineering skills to harvest!

[edit: I used the plural just to be clear that I don't want to single anyone out in particular, and it's friendly dissing, all good I hope :]
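For anyone tempted to try, here is a rough sketch of that experiment in Vulkan/C. The function is hypothetical, the surrounding setup (device, render pass, image, host-visible readback buffer) is assumed to already exist, and the whole point is that this is deliberately invalid API usage:

```c
#include <vulkan/vulkan.h>

/* Sketch of the 'forgotten barrier' experiment: after rendering to a
 * color target, copy it to a host-visible buffer WITHOUT the usual
 * COLOR_ATTACHMENT_OUTPUT -> TRANSFER barrier and layout transition.
 * This is undefined behavior per the Vulkan spec; the hope is that the
 * driver skips its DCC decompress pass so you read raw compressed bits. */
void copy_color_target_raw(VkCommandBuffer cmd, VkImage image,
                           VkBuffer readback, uint32_t width, uint32_t height)
{
    VkBufferImageCopy region = {0};
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.layerCount = 1;
    region.imageExtent.width  = width;
    region.imageExtent.height = height;
    region.imageExtent.depth  = 1;

    /* Normally a vkCmdPipelineBarrier() would go here, transitioning the
     * image to VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL - which is exactly
     * where the driver schedules DCC decompression. Deliberately omitted,
     * and we lie about the current layout below for the same reason. */
    vkCmdCopyImageToBuffer(cmd, image,
                           VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, /* wrong on purpose */
                           readback, 1, &region);
}
```

Caveat, as a reply further down notes: the driver tracks accesses to compressed surfaces, so it may have decompressed or re-tiled the data anyway, and the validation layers will rightly scream about this.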
 
Looks like it landed where it was supposed to: around 10% faster than a 1080 FE. The gaming cards might push that up to 15-20%; now, if only it could clock high enough for custom cards to match the performance gains of custom Pascal cards.
 
Pardon my ignorance, but is there anything to the theory that performance is hindered by that guy's 550-watt power supply? I thought that if your video card under load drew more power than the power supply could deliver, 'bad things would happen' (hard locks, resets, crashes...).
 
Pardon my ignorance, but is there anything to the theory that performance is hindered by that guy's 550-watt power supply? I thought that if your video card under load drew more power than the power supply could deliver, 'bad things would happen' (hard locks, resets, crashes...).

No. He found settings for the card that keep the clock at 1.6 GHz.
 
OK, so this guy did some live testing with a 1200 W PSU and an R7 1800X. The card is throttling like crazy: it gets very hot quickly and you need to ramp up the fan; he used two external fans to keep it in check. Anyway, here is a summary of the whole thing:


- Time Spy Graphics = 6785 pts
- Fire Strike Ultra Graphics = 5091 pts "roughly the same as the first leak"
- Doom Vulkan 4K Ultra @ 55-70 FPS
- The Witcher 3 @ 28-35 FPS at 4K Hairworks on / 41-42 FPS with Hairworks off (80 C running the game)
- Trouble OCing past 1650 MHz with the blower
- Tried to OC the HBM: it starts at 945 MHz; 960 MHz was stable, and at 980 MHz it got too hot
- Tester says you won't be able to OC without additional cooling due to thermal throttling (80-85 C)
- Tester put 2x 80 mm fans to help (open air case) and it was down to 75 C
- Tester doesn't think it will touch the 1080 Ti performance-wise, only 1070-1080s
- Mining: 30-35 MH/s
- Cinebench R15 OpenGL: 97.39 FPS
- Initial testing done using the Pro drivers, then switched to the Gaming drivers and performance didn't change
- Card initially operated at 1348-1528 MHz in gaming mode running The Witcher 3 (37-42 FPS); after retesting it was back to 1650 MHz max (and delivered the same 42 FPS as Pro mode)
- >300W for the VGA alone
- EVGA P2 1200W Platinum Power Supply + Ryzen R7 1800X CPU

"copied after watching"

Edited for correction
 
OK, so this guy did some live testing with a 1200 W PSU and an R7 1800X. The card is throttling like crazy: it gets very hot quickly and you need to ramp up the fan; he used two external fans to keep it in check. Anyway, here is a summary of the whole thing:


- Time Spy Graphics = 6785 pts
- Fire Strike Ultra Graphics = 5091 pts "roughly the same as the first leak"
- Doom Vulkan 4K Ultra @ 55-70 FPS
- The Witcher 3 @ 28-35 FPS at 4K Hairworks on / 41-42 FPS with Hairworks off (80 C running the game)
- Trouble OCing past 1650 MHz with the blower
- Tried to OC the HBM: it starts at 945 MHz; 960 MHz was stable, and at 980 MHz it got too hot
- Tester says you won't be able to OC without additional cooling due to thermal throttling (80-85 C)
- Tester put 2x 80 mm fans to help (open air case) and it was down to 75 C
- Tester doesn't think it will touch the 1080 Ti performance-wise, only 1070-1080s
- Mining: 30-35 MH/s
- Cinebench R15 OpenGL: 97.39 FPS
- Initial testing done using the Pro drivers, then switched to the Gaming drivers and performance didn't change (37-42 FPS @ TW3 4K HW off)
- Card operated at 1348-1528 MHz in gaming mode during The Witcher 3 testing
- >300W for the VGA alone
- EVGA P2 1200W Platinum Power Supply + Ryzen R7 1800X CPU

"copied after watching"


Don't act like they didn't warn everyone

[image attachment]
 
Aftermarket cooling will definitely help, but perf/watt doesn't look that impressive right now, and don't forget that Nvidia FE cards throttle and reach 80-85 C under load too. I'm also curious to see how this card performs with undervolting.
 
C'mon, you armchair experts [this is with a loving, friendly face, actually]. Simply try to 'forget' a barrier for your color targets and you have the raw color-compressed bits in front of you, ready for your reverse-engineering skills to harvest!

[edit: I used the plural just to be clear that I don't want to single anyone out in particular, and it's friendly dissing, all good I hope :]
The code might read something, although without knowing what the system is doing, it might not be clear whether that is all that is needed to interpret the data, or whether it's even the correct data.
Per AMD's DCC article, there would be some fighting with the driver/system, since it seems to be keeping track of possible accesses to compressed data and might decompress or change ratios to compensate.

Pardon my ignorance, but is there anything to the theory that performance is hindered by that guy's 550-watt power supply? I thought that if your video card under load drew more power than the power supply could deliver, 'bad things would happen' (hard locks, resets, crashes...).
I think it would be more impressive if AMD's hardware were resilient enough in the face of voltage drops that it could downclock and idle its way to stability where other hardware would shut down.
Unfortunately, the other side of the coin is that a power draw high enough to overwhelm that supply would mean the card is either defying physics or seriously inefficient.
More likely, the usual safety margin built into power supply recommendations is enough for the testing involved.
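For a rough power budget using the thread's own numbers (the platform overhead figure is a guess):

~300 W (card) + 95 W (1800X TDP) + ~60 W (board, RAM, drives, fans) ≈ 455 W

That sits inside a 550 W unit's rating, if uncomfortably close to it, so a decent 550 W supply shouldn't by itself be hard-limiting the card.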
 
Poor Volta, or empty drums? I suppose AMD can still save the day with a super-duper driver and/or a new hardware revision. I would still expect the highest-end retail Vega model to reach 1080 Ti performance. It just might be liquid-cooled and heavily factory-overclocked, which makes comparison to a Founders Edition 1080 Ti quite unfair.
 
Maybe if AMD respun Vega on an improved 14 nm process and clocked it at 2 GHz; otherwise, the 30-35% gap between a 1080 and a 1080 Ti is near impossible to bridge.
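As a sanity check on that 2 GHz figure, assuming performance scaled linearly with core clock (in practice it won't, quite): the FE tops out around 1600 MHz, so

1600 MHz × 1.30 ≈ 2080 MHz
1600 MHz × 1.35 ≈ 2160 MHz

i.e. even a flat 2 GHz on clocks alone wouldn't fully close a 30-35% gap, hence the need for a process respin on top.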
 
Don't act like they didn't warn everyone
Per https://forum.beyond3d.com/posts/1935963/
For now at least, it's looking a bit on the nose.

Poor Volta, or empty drums? I suppose AMD can still save the day with a super-duper driver and/or a new hardware revision. I would still expect the highest-end retail Vega model to reach 1080 Ti performance. It just might be liquid-cooled and heavily factory-overclocked, which makes comparison to a Founders Edition 1080 Ti quite unfair.
Possibly, there could be elements that might be toggled off.
Some things, like the rasterizer, do fall back to a standard mode, which at least in theory would have something like Fiji as its foundation. I'm withholding judgement overall, but it's at least not an encouraging data point.
 
OK, so this guy did some live testing with a 1200 W PSU and an R7 1800X. The card is throttling like crazy: it gets very hot quickly and you need to ramp up the fan; he used two external fans to keep it in check. Anyway, here is a summary of the whole thing:


- Time Spy Graphics = 6785 pts
- Fire Strike Ultra Graphics = 5091 pts "roughly the same as the first leak"
- Doom Vulkan 4K Ultra @ 55-70 FPS
- The Witcher 3 @ 28-35 FPS at 4K Hairworks on / 41-42 FPS with Hairworks off (80 C running the game)
- Trouble OCing past 1650 MHz with the blower
- Tried to OC the HBM: it starts at 945 MHz; 960 MHz was stable, and at 980 MHz it got too hot
- Tester says you won't be able to OC without additional cooling due to thermal throttling (80-85 C)
- Tester put 2x 80 mm fans to help (open air case) and it was down to 75 C
- Tester doesn't think it will touch the 1080 Ti performance-wise, only 1070-1080s
- Mining: 30-35 MH/s
- Cinebench R15 OpenGL: 97.39 FPS
- Initial testing done using the Pro drivers, then switched to the Gaming drivers and performance didn't change (37-42 FPS @ TW3 4K HW off)
- Card operated at 1348-1528 MHz in gaming mode during The Witcher 3 testing
- >300W for the VGA alone
- EVGA P2 1200W Platinum Power Supply + Ryzen R7 1800X CPU

"copied after watching"

Two different dudes with results in the same 1080 ballpark don't lie. Even if the RX Vega gaming version ends up faster, it won't beat Pascal, and for a product coming one year later, that's a failure.
We are watching one of the worst launches in graphics card history. After a year and a half of Raja's hype ("Poor Volta"?) and an epic disaster in product communication (the internet is all over the place, with all the hardware forums laughing and/or angry at AMD), maybe it's time for the RTG group to be audited by an external party and for some people to be changed.

The interesting discussion now is why Vega doesn't perform at the expected level, especially with all the features that make this chip so big and hot. IMHO, this "fine wine" / "future proof" mindset is a disaster in corporate and market-share terms. That's great for devs (and maybe consoles), but RTG should adopt a more pragmatic approach where each new feature must improve performance first and foremost. For example, the decision to keep the same front end with only 4 shader engines is a mistake. Yes, now it can use the ALUs to perform geometry tasks, but that needs custom programming. Why not double the geometry engines and get more performance automatically in all apps (in geometry-bound situations, of course)?
I've always thought that AMD is too ambitious in terms of design. With their limited resources, they should stick to a more "down to earth" attitude. It's like they're aiming to reach the moon with a balloon and some helium...
 
Maybe if AMD respun Vega on an improved 14 nm process and clocked it at 2 GHz; otherwise, the 30-35% gap between a 1080 and a 1080 Ti is near impossible to bridge.
Tiled rasterization alone could do that, if it's not currently enabled. The question is how many of the new features are currently enabled. All benches so far look like Fiji with higher clocks. That would seem to indicate that none of the new features are currently enabled, except packed math and HBCC, which AMD has demoed.

What's kind of surprising is that the 8-Hi stacks aren't overheating, as far as I can tell. Or they are, and ECC is correcting for it.
 
I doubt that. I was expecting AMD to somehow get Vega on the same footing with respect to Pascal as the 390X was against the 980 Ti, where clock speeds decide the difference, since the rest of the hardware is pretty similar and AMD has relative parity in performance per TFLOP. It doesn't look like they're even close.

edit: The scores are pretty close to 470/570 CrossFire; I think that points to the card throttling a lot, unless the dual front ends make that big of a difference.

 