> It is really, really common to have a USB debugger connected to a microprocessor's serial debug bus on a development board, which can access internal state or perform whatever manipulation it was designed for. I sincerely doubt it is anything new at all...

Sure there is. I think it is implied that this time there is something special about its implementation on Vega.
What's special is apparently the fact that it's present on every Vega card that AMD uses internally, so that even driver/software developers can get detailed power feedback for every change they make to their software.
> Do you guys know of any pictures of past engineering samples of cards with this kind of debug/QA stuff still built in?

It is really, really common to have a USB debugger connected to a microprocessor's serial debug bus on a development board, which can access internal state or perform whatever manipulation it was designed for. I sincerely doubt it is anything new at all...
AMD really taped off all the vents in their demo rig for Vega.
Check Linus's video showing the development Golemit AM4 platform with Ryzen and Vega in action.
It looks like there is only one PCIe power cable going to the card, though it is hard to tell for sure.
What developers really want is an efficient way to reduce triangles (amplification isn't needed). This needs to be as fast as standard indexed geometry when the output is identical. Hopefully AMD delivers this; future console games would certainly use it. But I also hope that improvements like this get adopted by other IHVs, just as Nvidia adopted Intel's PixelSync and Microsoft made it a DX12.1 core feature (ROV). Same for conservative raster.
Most mobile games use double-rate FP16 extensively; both Unity and Unreal Engine are optimized for FP16 on mobile. So far there haven't been PC discrete cards that gained noticeable perf from FP16 support, so these optimizations are not yet enabled on PC. But things are changing: PS4 Pro already supports double-rate FP16, and both of these engines support it. Vega is coming soon to consumer PCs with double-rate FP16, and Intel's Broadwell and Skylake iGPUs support it as well. Nvidia also supports it on P100 and on mobile, but has disabled it on consumer PC products. As soon as games show noticeable gains from FP16 on competitor hardware, I'm sure NV will enable it in future consumer products too.
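To make the trade-off concrete, here is a minimal sketch of the storage side of FP16 using Python's `struct` module, whose `"e"` format is the IEEE 754 half-precision type. (The doubled arithmetic rate on GPUs comes from packing two FP16 values into one 32-bit register lane, which this host-side sketch does not model.)

```python
import struct

values = [0.1, 0.25, 1.5, 1000.0]

# Pack the same vector as FP32 ("f") and FP16 ("e", IEEE 754 binary16).
fp32 = struct.pack("4f", *values)
fp16 = struct.pack("4e", *values)
print(len(fp32), len(fp16))  # 16 8 -> half the storage and bandwidth

# The cost: binary16 has only an 11-bit significand, so precision drops.
roundtrip = struct.unpack("4e", fp16)
print(roundtrip[0])  # 0.0999755859375 (0.1 is not exactly representable)
```

Halved footprint and bandwidth is exactly why mobile engines lean on FP16 for color and lighting math, where ~3 decimal digits of precision is enough.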
None of the Vega products quite match features attributed to Greenland via Linkedin or the HPC APU rumors. The supposed lead chip has 1/2 rate DPFP, with the "APU" using it having PCIe 3.0. I suppose there could be a wrinkle related to the CPU component that could explain that, although why AMD would hold back on double-precision for compute products if it were there is unclear.
> They claimed 4GB of very fast VRAM could overcome the seeming lack of memory, specifically in the case of Fiji with HBM. This is something you can object to, but that's the whole statement.

That is one of my biggest problems with AMD. When they sent the R9 390 against the 970/980, 8GB was a must; when they sent Fiji against the 980 Ti, 4GB was enough; when they sent the RX 480 against the 1060, 8GB was okay but 6 was not; and when they send Vega against GP102, suddenly 4GB is enough and 8 is plenty.
Here's a rather depressing take on Vega:
http://techbuyersguru.com/ces-2017-amds-ryzen-and-vega-revealed?page=1
"Additionally, Scott made clear that this is very much a next-gen product, but that many of the cutting-edge features of Vega cannot be utilized natively by DX12, let alone DX11."
Is Vega another 7970 that will take years before it gets competitive?
Really? Haven't they learned anything?
> The point was that they have had an on-chip management unit for quite a long time already, which gathers this kind of information.

Of course it being a USB interface isn't special. To spell it out: it could be something more than your average ICE/JTAG/Nexus...
I'm not saying that it is, either. Maybe Raja was ignorant when making those statements, or he relied on the audience being ignorant and easily impressed by common engineering practices.
I'd speculate that they could have built more user-friendly software on top of those debug interfaces (which I doubt is that uncommon either, but it could significantly improve the lives of the engineers who didn't have it until yesterday). An alternative theory: while detailed chip info was already available, they're now making it available to more people or business entities.
It's all speculation, and it could be nothing important, but simply dismissing Raja's statements as "every chip has a debug interface" seems unfair given the little info we've got.
> When and where exactly have they (AMD) claimed that? As I remember it, they claimed there are quite a few spots where the driver could manage memory better on Fury than on GCN1 or GCN2. Which is true. The idea that very fast VRAM could overcome a lack of memory was pulled out of someone's behind.

They claimed 4GB of very fast VRAM could overcome the seeming lack of memory, specifically in the case of Fiji with HBM. This is something you can object to, but that's the whole statement.
> When and where exactly have they (AMD) claimed that? As I remember it, they claimed there are quite a few spots where the driver could manage memory better on Fury than on GCN1 or GCN2. Which is true. The idea that very fast VRAM could overcome a lack of memory was pulled out of someone's behind.

I remember it from an interview with Raja Koduri.
The point was that they have had an on-chip management unit for quite a long time already, which gathers this kind of information.
> So by examining rectangular colored blocks you can claim with significant strength that nothing changed. I see...

Well, I didn't say nothing changed. What I'm saying is that this kind of information already exists, as publicly shown by those shiny rectangular boxes, so it isn't surprising at all to have the ability to probe into the chip and intercept those signals, or the on-chip microcontroller, for bring-up or whatever. It's just that somebody happened to ask "what is this" in an interview, while generally no one bothers to mention it because it isn't relevant to the marketing of the end product at all.
This logic is BS. I still maintain that someone just pulled it out of their behind, or at least misinterpreted what was said by AMD. Therefore I'm interested in finding this interview. In my previous post I was referring to this, specifically:

> I remember it from an interview with Raja Koduri.
I think the logic was: if you have enough memory bandwidth to spare, then overall performance won't be affected by more frequent memory swaps over the PCIe bus.
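That argument can be put into rough numbers. A hedged back-of-envelope sketch, using nominal peak rates (PCIe 3.0 x16 at ~15.75 GB/s effective versus Fiji's ~512 GB/s HBM aggregate bandwidth; real sustained rates are lower):

```python
# Back-of-envelope: cost of re-streaming an evicted 1 GB asset over
# PCIe 3.0 x16 versus reading it from on-card HBM. Nominal peak rates.
PCIE3_X16_GBS = 15.75  # GB/s, effective peak for PCIe 3.0 x16
FIJI_HBM_GBS = 512.0   # GB/s, Fiji HBM1 aggregate bandwidth

asset_gb = 1.0
pcie_ms = asset_gb / PCIE3_X16_GBS * 1000  # ~63.5 ms, several 60 fps frames
hbm_ms = asset_gb / FIJI_HBM_GBS * 1000    # ~2.0 ms

print(f"PCIe transfer: {pcie_ms:.1f} ms")
print(f"HBM read:      {hbm_ms:.1f} ms")
```

The roughly 30x gap suggests spare local bandwidth only helps once the data is resident; whether swapping stays invisible depends on prefetching assets well before they are touched, not on HBM speed alone.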
As I mentioned above: things connected to delta compression.

> When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."