AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

What's special is apparently the fact that it's present on every Vega card that AMD uses internally, so that even driver/software developers can get detailed power feedback for every change they make to their software.
 
Sure there is. I think it is implied that this time there is something special about its implementation on Vega.
It is really, really common to have a USB debugger connected to a microprocessor's serial debug bus on a development board, which can access internal state or perform whatever manipulation the design allows. I sincerely doubt it is anything new at all...
 
What's special is apparently the fact that it's present on every Vega card that AMD uses internally, so that even driver/software developers can get detailed power feedback for every change they make to their software.
It is really, really common to have a USB debugger connected to a microprocessor's serial debug bus on a development board, which can access internal state or perform whatever manipulation the design allows. I sincerely doubt it is anything new at all...
Do you guys know of any pictures of past engineering samples of cards with this kind of debug/QA stuff still built in?
 
AMD really taped off all the vents in their demo rig for VEGA.

Check Linus's video showing the development Golemit AM4 platform with RyZen and VEGA in action.

It looks like there is only 1 PCIe power cable going to the VGA, though it is hard to tell for sure.


What kind of power supply is it? Then it's easy to figure out whether it's just one connector to a 6-pin or 8-pin, or two 6-pins vs two 8-pins. To me it looks like 2 yellow and 2 black coming from the power supply.

Also, at the 3:11 mark in another part of the video, there is another yellow and black cable matching the PCIe connector from the power supply, visible in the back. So it could be using two 8-pins, or one 8-pin and one 6-pin. The one coming from the power supply in front looks like 2 yellow in front, 2 yellow in back, and 2 black, which should be an 8-pin.
 
AMD really taped off all the vents in their demo rig for VEGA.

Check Linus's video showing the development Golemit AM4 platform with RyZen and VEGA in action.

It looks like there is only 1 PCIe power cable going to the VGA, though it is hard to tell for sure.

The 2 front fans are not taped over. You can clearly see the LED fans spinning, and the whole front is not taped over either. There is enough room for them to suck in air. In fact, it looks like quite a good air duct if the GPU is running a blower fan. Looking at the set-up, I am more impressed with Ryzen than with VEGA. The CPU cooler was rather small and it surely got less air than the GPU.
 
What developers really want is an efficient way to reduce triangles (amplification isn't needed). This needs to be as fast as standard indexed geometry when the output is identical. Hopefully AMD delivers this. Future console games would certainly use it. But I certainly hope that improvements like this get adopted by other IHVs, just like Nvidia adopted Intel's PixelSync and Microsoft made it a DX12.1 core feature (ROV). Same for conservative raster.

Most mobile games use double rate FP16 extensively. Both Unity and Unreal Engine are optimized for FP16 on mobiles. So far there haven't been PC discrete cards that gained noticeable performance from FP16 support, so these optimizations are not yet enabled on PC. But things are changing. The PS4 Pro already supports double rate FP16, and both of these engines support it. Vega is coming soon to consumer PCs with double rate FP16. Intel's Broadwell and Skylake iGPUs support double rate FP16 as well. Nvidia also supports it on P100 and on mobiles, but has disabled support on consumer PC products. As soon as games show noticeable gains from FP16 on competitor hardware, I am sure NV will enable it in future consumer products as well.
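To make the double rate FP16 idea concrete, here's a minimal NumPy sketch (a CPU-side illustration only, not GPU shader code): FP16 halves the storage and bandwidth per value, which is what lets hardware pack two FP16 operations into one 32-bit lane, at the cost of precision.

```python
import numpy as np

# Double rate FP16 in hardware packs two 16-bit values into one 32-bit lane,
# so one instruction performs two operations. This sketch only illustrates
# the storage/precision trade-off, not the packed math itself.
a32 = np.ones(1024, dtype=np.float32)
a16 = a32.astype(np.float16)

print(a32.nbytes)  # 4096 bytes
print(a16.nbytes)  # 2048 bytes: half the storage and bandwidth per value

# The cost: FP16 carries ~3 decimal digits of precision vs ~7 for FP32.
print(np.float16(1.0) + np.float16(1e-4) == np.float16(1.0))  # True
```

That precision loss is usually fine for color and lighting math in games, which is why mobile engines lean on it so heavily.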

Since this is quite likely included in the PS4 Pro and most likely in the upcoming Xbox Scorpio, I would think/hope chances are very good that developers will use this in their titles, and those who care will try to port it over to the PC instead of just "click recompile". I haven't seen double rate FP16 made available in Intel's public drivers yet, though.
 
It is really, really common to have a USB debugger connected to a microprocessor's serial debug bus on a development board, which can access internal state or perform whatever manipulation the design allows. I sincerely doubt it is anything new at all...

Of course it being a USB interface isn't special. To spell it out, it could be something more than your average ICE/JTAG/Nexus ...

Not saying either that it is. Maybe Raja was ignorant when making those statements, or he relied on the audience being unaware of common engineering practices and easily impressed by them.

I'd speculate that they could have built more user-friendly software on top of those debug interfaces (which I hope isn't that uncommon either, but it could significantly improve the life of the engineers who didn't have it until yesterday). Or an alternative theory: while detailed chip info was already available, they're now making it available to more people or business entities.

It's all speculation, and it could be nothing important, but simply dismissing Raja's statements as "every chip has a debug interface" seems unfair given the little info we've got.
 
During an interview, Raja confirmed the mobile RX500 parts are just rebrands for notebook OEMs.


None of the Vega products quite match the features attributed to Greenland via LinkedIn or the HPC APU rumors. The supposed lead chip has 1/2 rate DPFP, with the "APU" using it having PCIe 3.0. I suppose there could be a wrinkle related to the CPU component that could explain that, although why AMD would hold back double precision on compute products if it were there is unclear.

Vega 20 says PCIe 4.0 host, meaning it might be just a 4-lane host for future NVMe SSDs. The current Fiji SSG uses two NVMe Samsung 950 Pros in RAID0 (theoretical 6.4GB/s read speeds) through a PLX controller. A single PCIe 4.0 link should be able to get about the same bandwidth with just one SSD.
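As a sanity check on that bandwidth claim, here is the back-of-the-envelope math (per-lane rates and 128b/130b line coding per the PCIe specs; the 6.4 GB/s figure is the theoretical RAID0 number quoted above):

```python
# Usable PCIe bandwidth per direction, after 128b/130b line coding.
# Gen3 signals at 8 GT/s per lane, Gen4 at 16 GT/s per lane.
def pcie_gbs(gt_per_sec, lanes):
    return gt_per_sec * lanes * (128 / 130) / 8  # GB/s

gen3_x4 = pcie_gbs(8, 4)    # ~3.94 GB/s
gen4_x4 = pcie_gbs(16, 4)   # ~7.88 GB/s

print(round(gen3_x4, 2), round(gen4_x4, 2))
```

So a Gen4 x4 link at ~7.9 GB/s would indeed cover the ~6.4 GB/s the dual-SSD RAID0 setup is rated for, with headroom to spare, while Gen3 x4 would not.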

The corresponding Greenland APU has the GPU connected to the CPU through GMI, and the CPU part being a 1st-gen Zen probably has a PCIe 3.0 host.


That is one of my biggest problems with AMD. When they sent the R390 against the 970/980, 8GB was a must; when they sent Fiji against the 980 Ti, 4GB was enough; when they sent the RX480 against the 1060, 8GB was okay and 6 was not; when they sent Vega against GP102, suddenly 4GB is enough and 8 is plenty.
They claimed 4GB of very fast VRAM could overcome the seeming lack of memory, specifically in the case of Fiji with HBM. This is something you can object to, but that's the whole statement.
I don't recall AMD claiming 8GB to be "a must" when the R390 was launched, and the same goes for AMD claiming 6GB is too low for the 1060.




Here's a rather depressing take on Vega:
http://techbuyersguru.com/ces-2017-amds-ryzen-and-vega-revealed?page=1

"Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11."

They're probably talking about 2*FP16 throughput, and the AnandTech article said the exact same thing: FP16 will probably not be utilized in mainstream games during Vega 10's active life.

This doesn't mean it couldn't help the card age better 3 or 4 years after release.


Is Vega another 7970 that will take years before it's getting competitive?
Really? Haven't they learned anything?

You're mixing two different things. One is launching a card whose features can be taken advantage of on day one of release; another is launching a card that performs close to competitors selling in a similar price range.

Although not an obvious winner or loser, the 7970 was competitive at launch. The fact that it climbed above the GTX 680's performance levels in titles released ~3 years later doesn't make it a worse card.
 
Of course it being a USB interface isn't special. To spell it out, it could be something more than your average ICE/JTAG/Nexus ...

Not saying either that it is. Maybe Raja was ignorant when making those statements, or he relied on the audience being unaware of common engineering practices and easily impressed by them.

I'd speculate that they could have built more user-friendly software on top of those debug interfaces (which I hope isn't that uncommon either, but it could significantly improve the life of the engineers who didn't have it until yesterday). Or an alternative theory: while detailed chip info was already available, they're now making it available to more people or business entities.

It's all speculation, and it could be nothing important, but simply dismissing Raja's statements as "every chip has a debug interface" seems unfair given the little info we've got.
The point was that they have had an on-chip management unit for quite a long time already, which gathers this kind of information.
 
They claimed 4GB of very fast VRAM could overcome the seeming lack of memory, specifically in the case of Fiji with HBM. This is something you can object to, but that's the whole statement.
When and where exactly have they (AMD) claimed that? As I remember it, they claimed that there are quite a few spots where the driver could manage memory better on Fury than on GCN1 or GCN2. Which is true. The idea that very fast VRAM could overcome a lack of memory was pulled out of someone's behind.
 
What AMD said is that games ask to allocate enormous amounts of VRAM but only use a portion of it. In that regard, having less VRAM and using system RAM and storage in a smarter way, with bigger bandwidth, could be more efficient.

They said that most games, even at 4K ultra, only actively use 4GB of data, so having double or triple that is not worth it; if we keep this trend we will end up with dozens of GBs on cards just to use a portion. Raja also said they want to focus on bandwidth rather than the amount of memory, so maybe AMD wants to move away from GDDR to HBM?
 
When and where exactly have they (AMD) claimed that? As I remember it, they claimed that there are quite a few spots where the driver could manage memory better on Fury than on GCN1 or GCN2. Which is true. The idea that very fast VRAM could overcome a lack of memory was pulled out of someone's behind.
I remember it from an interview with Raja Koduri.

I think the logic was that if you have enough memory bandwidth to spare, then general performance won't be affected by more frequent memory swaps over the PCIe bus.
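To put rough numbers on that logic (illustrative theoretical peak figures, not measurements): spare local bandwidth only hides swaps if they are small and rare, because the PCIe bus is more than an order of magnitude slower than HBM.

```python
# Time to move a 256 MB resource at theoretical peak rates (illustrative).
def transfer_ms(size_gb, bandwidth_gbs):
    return size_gb / bandwidth_gbs * 1000.0

pcie3_x16_gbs = 15.75  # GB/s, PCIe 3.0 x16 theoretical peak
hbm1_gbs = 512.0       # GB/s, Fiji's HBM1

print(round(transfer_ms(0.25, pcie3_x16_gbs), 2))  # ~15.87 ms over the bus
print(round(transfer_ms(0.25, hbm1_gbs), 2))       # ~0.49 ms from local VRAM
```

A single 256 MB swap eats essentially a whole 60 fps frame budget (16.7 ms), so the scheme only works if the driver swaps rarely and in small chunks, which is exactly where it gets contested.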
 
Regarding the sudden disappearance of HBM gen2 from SK Hynix's Databook: the new Q1/17 version is out and HBM gen2 has made a re-appearance, albeit only in its slow version at 1.6 Gbps and with a maximum density of 4 GByte. At least it's billed as available this quarter! Oh, and the name changed: of the former …-H20C and …-H12C, the former does not appear right now and the latter is named …H1K:

http://www.skhynix.com/static/filedata/fileDownload.do?seq=366
 
I like how Raja has become the rock star of PC hardware.

Sent from my HTC One via Tapatalk
 
So by examining colored rectangular blocks you can claim with great confidence that nothing changed ;) I see...
Well, I didn't say nothing changed. What I'm saying is that this kind of information already exists, as publicly shown by those shiny rectangular boxes, so it isn't surprising at all to have the ability to probe into the chip and intercept these signals, or to use the on-chip microcontroller for bring-up or whatever. It is just that somebody happened to ask "what is this" in an interview, while generally no one ever bothers to talk about it because it isn't relevant to the marketing of the end product at all.

So if you think you are being sarcastic, maybe it is not working. ;)
 
I remember it from an interview with Raja Koduri.

I think the logic was that if you have enough memory bandwidth to spare, then general performance won't be affected by more frequent memory swaps over the PCIe bus.
This logic is BS. I still maintain that someone just pulled it out of their behind, or at least misinterpreted what AMD said. That's why I'm interested in getting this interview. In my previous post I was referring to this. Specifically:
When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."
As I mentioned above: things connected to delta compression.
Which the online world then translated into what you're saying in your post. Which is BS.
 
You should watch the interview and read the articles and previews.

Sent from my HTC One via Tapatalk
 