AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Not natively, which is what I meant when I said application process. In the case of a Vega APU, it should be able to access the memory controller on its own to migrate pages, no different than a CPU core accessing memory. OS support shouldn't be required, but it probably helps. The Nvidia solution required the CUDA runtime, which isn't that different from having the application page in memory as required, or from letting the driver handle memory management. The exception, to my understanding, was the POWER8 with NVLink, which could handle the operation in hardware. Software vs. hardware solutions to the same problem.
The OS manages the entire virtual address space of every process. Migrating pages from one physical location to another while maintaining coherency cannot bypass the OS in any way.

Nvidia's solution (apparently) is not as simple as "requiring the CUDA runtime". It likely involves an extra level of indirection through the GPU's own 49-bit VAS, which sets up mirrored pages from the host VAS dynamically upon a page fault in the low 48-bit VAS, with hardware assistance (page fault handling, a page migration engine, etc.). It cannot work efficiently without GPU hardware support, and the so-called OS support is about enabling it for all OS-managed memory. On Linux, it would apparently be enabled through HMM.

AMD is unlikely to have a radically different approach.
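The fault-driven flow described above can be sketched as a toy model (all class and function names here are illustrative, not any real driver or HMM API; real implementations use hardware page tables and a DMA migration engine):

```python
# Toy model of fault-driven page migration between a host address space
# and a device's own VAS. Illustrative only -- not a real driver API.

PAGE_SIZE = 4096

class HostMemory:
    """Stands in for OS-managed system memory: page number -> contents."""
    def __init__(self):
        self.pages = {}

class Device:
    """A GPU-like device with its own page table and local memory."""
    def __init__(self, host):
        self.host = host
        self.page_table = {}   # device virtual page -> local copy
        self.faults = 0

    def access(self, vpage):
        # Hit: the page is already resident in device memory.
        if vpage in self.page_table:
            return self.page_table[vpage]
        # Miss: take a page fault and migrate the page from the host.
        self.faults += 1
        data = self.host.pages.get(vpage, bytes(PAGE_SIZE))
        self.page_table[vpage] = data   # map it into the device VAS
        return data

host = HostMemory()
host.pages[7] = b"x" * PAGE_SIZE
gpu = Device(host)

gpu.access(7)      # first touch: page fault + migration
gpu.access(7)      # second touch: served locally, no fault
print(gpu.faults)  # 1
```

The point of the hardware assistance is that the fault and the copy happen without the application (or the CUDA runtime) orchestrating each page explicitly.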
 
Probably a 470. The 480, unless they do a serious respin, won't go into laptops (you get much more performance and lower power consumption with a 1070, or the same performance with much lower power consumption with a 1060).
 
Probably a 470. The 480, unless they do a serious respin, won't go into laptops (you get much more performance and lower power consumption with a 1070, or the same performance with much lower power consumption with a 1060).

Yeah, the more I think about it, the more I agree that it's probably a refreshed 470 that introduces an 8GB option. Adding an 8GB option to the 470 is a good "value-add" when you're trying to rebrand old-ish cards.
 
But using "Primitive Shaders", for example, which to my knowledge don't exist in any of the APIs and aren't supported by the competition, likely limits their use a bit. That statement seems to be more about new techniques taking time to really take hold. It's simply forward-looking hardware with more capabilities than are currently practical.
AMD seems to be listening to developers. GPU-driven rendering needs more efficient culling.
Ubisoft: http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf
Frostbite: http://www.frostbite.com/2016/03/optimizing-the-graphics-pipeline-with-compute/

Current approaches (tessellation, geometry shaders) can amplify and cull triangles, but they have a critical flaw: the GPU always needs to transform 3 vertices per triangle, so you lose the benefits of index buffering. Indexed geometry reaches a transform rate of 0.7 vertices per triangle (in the best cases). Just pushing your triangles through the tessellation / geometry shader stages without amplifying anything (identical output) adds around 3x cost in geometry-bound cases. What developers really want is an efficient way to cull triangles (amplification isn't needed), and it needs to be as fast as standard indexed geometry when the output is identical. Hopefully AMD delivers this. Future console games would certainly use it, but I certainly hope that improvements like this get adopted by other IHVs, just like Nvidia adopted Intel's PixelSync and Microsoft made it a DX12.1 core feature (ROV). Same for conservative rasterization.
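The index-buffer reuse argument above can be illustrated with a toy post-transform vertex cache model (a simple FIFO cache is assumed here; real GPU caches differ in size and policy, and the exact ratio depends on mesh layout and traversal order):

```python
from collections import deque

def transforms_per_triangle(indices, cache_size=32):
    """Vertex shader invocations per triangle for an index buffer,
    assuming a simple FIFO post-transform vertex cache (toy model)."""
    cache = deque(maxlen=cache_size)
    transforms = 0
    for idx in indices:
        if idx not in cache:      # cache miss: the vertex gets transformed
            transforms += 1
            cache.append(idx)
    return transforms / (len(indices) // 3)

def grid_indices(w, h):
    """Index buffer for a w x h grid of quads, two triangles each."""
    tris = []
    for y in range(h):
        for x in range(w):
            a = y * (w + 1) + x
            b, c, d = a + 1, a + w + 1, a + w + 2
            tris += [a, b, c, c, b, d]
    return tris

indexed = grid_indices(14, 14)         # rows narrow enough for cache reuse
unindexed = list(range(len(indexed)))  # a unique vertex per triangle corner

print(transforms_per_triangle(indexed))    # well below 1.0 with good reuse
print(transforms_per_triangle(unindexed))  # exactly 3.0 -- no reuse possible
```

This is why forcing geometry through a stage that can't exploit vertex reuse costs roughly 3x in geometry-bound cases.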
Nvidia and AMD have introduced FP16/INT8, but they are not used in games, so for now this is a factor we still have to set aside when speaking about gaming. Some other features need a specific code path, but that's just additional features; it's already the case for both AMD and Nvidia: see GPUOpen: http://gpuopen.com/

The good thing is that with Vulkan and DX12, developers are closer to the hardware than ever and can adopt new techniques really fast.
Most mobile games use double rate FP16 extensively. Both Unity and Unreal Engine are optimized for FP16 on mobiles. So far there haven't been PC discrete cards that gained noticeable perf from FP16 support, so these optimizations are not yet enabled on PC. But things are changing. PS4 Pro already supports double rate FP16, and both of these engines support it. Vega is coming soon to consumer PCs with double rate FP16. Intel's Broadwell and Skylake iGPUs support double rate FP16 as well. Nvidia also supports it on P100 and on mobiles, but they have disabled support on consumer PC products. As soon as games show noticeable gains on competitor hardware from FP16, I am sure NV will enable it in future consumer products as well.
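The tradeoff developers are making here can be shown with IEEE half precision (NumPy's `float16` is used below purely as an illustration of the format, not of any GPU's implementation):

```python
import numpy as np

# float16 stores a 10-bit significand (11 bits effective), so integers
# above 2048 are no longer exactly representable:
a = np.float16(2048.0)
print(a + np.float16(1.0) == a)   # True: 2049 rounds back to 2048

# float32 handles the same sum exactly:
b = np.float32(2048.0)
print(b + np.float32(1.0) == np.float32(2049.0))   # True

# The payoff: half the storage per value, and double the ALU rate on
# hardware with packed FP16 support (PS4 Pro, Vega, some iGPUs).
print(np.float16(0).nbytes, np.float32(0).nbytes)  # 2 4
```

Shader math like color blending or lighting attenuation tolerates this precision fine, which is why mobile engines lean on it so heavily.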
 
AMD really taped off all the vents in their demo rig for Vega.

Check Linus's video showing the development Golemit AM4 platform with Ryzen and Vega in action.

It looks like there is only one PCIe power cable going to the card, though it is hard to tell for sure.
 
AMD really taped off all the vents in their demo rig for Vega.

Check Linus's video showing the development Golemit AM4 platform with Ryzen and Vega in action.

It looks like there is only one PCIe power cable going to the card, though it is hard to tell for sure.

Yeah, they really went hardcore with that tape job.

I'm interested in the little "daughterboard" (or "extension") for monitoring/QA. Is that standard practice for all engineering samples? Linus's line of questioning made it sound like he assumed this was a new effort for Vega, but it seems like a technique that would be worthwhile for any GPU.
 
What surprises me the most is how much Vega still seems to be in the development stage (debug boards, final specs not yet decided, final performance targets not yet decided...) and how close it is to launch...
 
In the other video, Raja was asked if the Doom demo was running on the top-tier Vega.


The chip is already done; the board is used just to fine-tune drivers and BIOS. It's pretty much close to being done, just tweaks here and there.

Prior to this stage, experimental boards look very different: most likely water-cooled, or with a heavy fan and wires everywhere on the PCB.
 
AMD should have offered Keller a blank check. The Keller/Raja duo would have been orgasmic to watch.

 
AMD really taped off all the vents in their demo rig for Vega.

Check Linus's video showing the development Golemit AM4 platform with Ryzen and Vega in action.

It looks like there is only one PCIe power cable going to the card, though it is hard to tell for sure.

What's the point of taping everything and then allowing Linus to open the case and record closeups?!
 
What's the point of taping everything and then allowing Linus to open the case and record closeups?!
The tape covering the slots at the back would be to prevent anyone peering in the back from seeing the power cables going into the GPU. From the look of the tape on the power cables themselves, it doesn't look like it would stop anyone seeing from that angle. I guess they didn't have slot covers, so they had to tape it up there.

The point is that no one knows what the power requirements are, and they didn't want the public making assumptions about them when it's an engineering card with no power profiling done yet.
 
Sure there is. I think it is implied that this time there is something special about its implementation on Vega.
 