> You could probably force it, compression style (maximum bitrate), to apply enough to get the performance gains needed for a target framerate. You'd run a pass to find the low-detail areas where VRS is a good fit, and if there isn't enough low-detail area, change the threshold until there is, giving you the equivalent of macroblocking but with shader detail. Maybe a big explosion blurs out the detail a bit, and on dense foliage the foliage detail is just reduced, then increased on the same assets when there are fewer of them. You could also just up VRS in the periphery, keeping everything centre-screen sharp and reducing detail towards the edges, foveated-rendering style.

Okay, so we might get some really good dynamic VRS replacing the coarser method of dynamic resolution as the way to get a stable frame rate.
Summary? I didn’t get anything from the abstract
> Okay, so we might get some really good dynamic VRS replacing the coarser method of dynamic resolution as the way to get a stable frame rate.

Or a hybrid solution mixing both methods.
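For what it's worth, the control loop being imagined here is basically the one dynamic resolution already uses, just steering a shading-rate bias instead of (or alongside) a render scale. A minimal sketch of that per-frame feedback step, assuming a hypothetical pair of quality knobs (none of this is any engine's actual API):

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical per-frame quality knobs a renderer might expose.
struct FrameQuality {
    float resolutionScale; // 0.5 .. 1.0, classic dynamic resolution
    int   vrsBias;         // 0 = full rate, 1 = prefer 2x2, 2 = prefer 4x4 in eligible tiles
};

// Simple proportional controller: when the GPU runs over budget, lean on the
// cheaper-looking knob (VRS bias) first, then fall back to resolution scale.
FrameQuality UpdateQuality(FrameQuality q, float gpuFrameMs, float budgetMs) {
    float error = gpuFrameMs - budgetMs; // > 0 means we missed the budget
    if (error > 0.5f) {
        if (q.vrsBias < 2) q.vrsBias++;  // coarsen shading first
        else q.resolutionScale = std::max(0.5f, q.resolutionScale - 0.05f);
    } else if (error < -1.0f) {          // comfortably under budget: restore quality
        if (q.resolutionScale < 1.0f) q.resolutionScale = std::min(1.0f, q.resolutionScale + 0.05f);
        else if (q.vrsBias > 0) q.vrsBias--;
    }
    return q;
}

int main() {
    FrameQuality q{1.0f, 0};
    float fakeGpuMs[] = {18.2f, 17.9f, 17.0f, 16.1f, 15.2f}; // pretend readings vs a 16.6 ms budget
    for (float ms : fakeGpuMs) {
        q = UpdateQuality(q, ms, 16.6f);
        std::printf("scale=%.2f vrsBias=%d\n", q.resolutionScale, q.vrsBias);
    }
}
```

In practice you'd filter the frame-time signal and add hysteresis so the image doesn't visibly pump, but the shape of the loop is the same whether it drives resolution, VRS, or both.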
> Summary? I didn’t get anything from the abstract

Compute tunnelling?
I'm curious how VRS will impact unstable frame rates. If the screen is filled with fine foliage, there's no area of the screen to rate down, so does VRS do nothing for some framings and work great for others?
You could probably force it, compression style (maximum bitrate), to apply enough to get the performance gains needed for a target framerate. You'd run a pass to find the low-detail areas where VRS is a good fit, and if there isn't enough low-detail area, change the threshold until there is, giving you the equivalent of macroblocking but with shader detail. Maybe a big explosion blurs out the detail a bit, and on dense foliage the foliage detail is just reduced, then increased on the same assets when there are fewer of them. You could also just up VRS in the periphery, keeping everything centre-screen sharp and reducing detail towards the edges, foveated-rendering style.
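The "run a pass for low detail and move the threshold until enough of the screen qualifies" step is essentially a tile-classification pass over the previous frame. A sketch of what that could look like as a compute kernel (CUDA purely for illustration; a real implementation would write a D3D12/Vulkan shading-rate image, and the 16x16 tile size, contrast metric and thresholds are all invented for the example):

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Hypothetical shading rates, mirroring the usual 1x1 / 2x2 / 4x4 tiers.
enum ShadingRate : uint8_t { RATE_1X1 = 0, RATE_2X2 = 1, RATE_4X4 = 2 };

// One 16x16 thread block per screen tile. 'luma' is last frame's luminance,
// 'rates' is the per-tile shading-rate image fed back to the rasterizer.
// 'threshold' is the detail level below which a tile may be shaded coarsely;
// the host raises or lowers it frame to frame until enough tiles qualify
// (the "compression style, maximum bitrate" idea from the post above).
__global__ void ClassifyTiles(const float* luma, int width, int height,
                              uint8_t* rates, int tilesX,
                              float threshold, float edgeBoost)
{
    __shared__ float maxContrast;
    if (threadIdx.x == 0 && threadIdx.y == 0) maxContrast = 0.0f;
    __syncthreads();

    int x = blockIdx.x * 16 + threadIdx.x;
    int y = blockIdx.y * 16 + threadIdx.y;
    if (x < width - 1 && y < height - 1) {
        // Crude detail metric: local luminance gradient magnitude.
        float c  = luma[y * width + x];
        float dx = fabsf(luma[y * width + x + 1] - c);
        float dy = fabsf(luma[(y + 1) * width + x] - c);
        // Bit-pattern max is valid here because the values are non-negative.
        atomicMax((int*)&maxContrast, __float_as_int(dx + dy));
    }
    __syncthreads();

    if (threadIdx.x == 0 && threadIdx.y == 0) {
        // Foveated-style bias: treat detail as lower towards the screen edge,
        // so peripheral tiles drop to coarse rates sooner.
        float cx = (blockIdx.x + 0.5f) / gridDim.x - 0.5f;
        float cy = (blockIdx.y + 0.5f) / gridDim.y - 0.5f;
        float eccentricity = sqrtf(cx * cx + cy * cy) * 2.0f; // ~0 at centre, ~1 at corners
        float effective = maxContrast * (1.0f - edgeBoost * eccentricity);

        uint8_t rate = RATE_1X1;
        if (effective < threshold)             rate = RATE_4X4;
        else if (effective < 2.0f * threshold) rate = RATE_2X2;
        rates[blockIdx.y * tilesX + blockIdx.x] = rate;
    }
}
```

The host would then count how many tiles came back coarse and nudge the threshold for the next frame until the coarse area is large enough to hit the frame-time target. That also partly answers the foliage question: a frame that is wall-to-wall high-contrast foliage genuinely gives VRS little to work with unless the threshold is forced up, at which point the foliage itself is what gets coarsened.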
> Summary? I didn’t get anything from the abstract

Umm... after trying to decipher some sense out of it, I want to forward the question to the experts here.
Looks like quite a few patents invented by Ivan Nevraev for Microsoft just went active today.
https://patents.google.com/?inventor=Ivan+Nevraev&after=priority:20160101
Anything of interest? Asking those with more technical know-how than me.
> Interesting, there's quite a variety of things in there. Various VRS patents, a game-streaming-related patent (latency), RT-related patents, graphics-development-related patents, and a GPU-related patent.

I would say that the patents co-authored with Mark S. Grossman are a safe bet as being intended for consoles. He's the Xbox chief GPU architect.
Interesting stuff, but I'll leave it for someone more technical to try to determine what might or might not be applicable to consoles.
Regards,
SB
> Compute tunnelling?

It Just Works™?
> Yeah. To me GCN was even five times faster than Kepler in compute. Just nobody talked about it, not even AMD themselves, it seemed. When did you ever see a 5x lead over the competition? Never. And today all we hear is how 'behind' AMD is.

It's been some time since then, so I've probably forgotten many things, but which benchmarks or metrics had a 5x lead? There were some specific use cases like double precision that I can remember, although that would understandably be of little concern outside of compute like HPC, where AMD's lack of a software foundation negated even leads like that.
To me GCN is the best GPU architecture ever made, and the power it draws translates into performance. I think AMD makes big changes less often, but when they do, there is a good chance they take the lead for some time.
> It's been some time since then, so I've probably forgotten many things, but which benchmarks or metrics had a 5x lead? There were some specific use cases like double precision that I can remember, although that would understandably be of little concern outside of compute like HPC, where AMD's lack of a software foundation negated even leads like that.

The benchmark is my own work on realtime GI. The workloads are breadth-first traversals of a BVH / point hierarchy, raytracing for visibility, and building acceleration structures, but it's not comparable to classic raytracing: complexity is much higher and random access is mostly avoided. The general structure of the programs is load from memory, heavy processing using LDS, write to memory. I rarely access memory during the processing phase, and there is a lot of integer math, scan algorithms, and a lot of bit packing to reduce LDS use. Occupancy is good, 70-80% overall. It's compute only: no rasterization or texture sampling.
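The "load from memory, heavy processing using LDS, write to memory" shape described there is the classic GPU compute pattern: stage data in on-chip memory, do the scans, packing and compaction there, then stream results back out. A generic CUDA sketch of that pattern (shared memory standing in for LDS; this illustrates the structure, not the poster's actual GI code):

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Generic "load -> process in LDS -> store" block: each 256-thread block loads
// 256 node indices, keeps the ones flagged visible (placeholder predicate),
// computes output slots with a shared-memory prefix scan, and writes them out
// bit-packed (two 16-bit indices per 32-bit word) to save bandwidth.
// 'packedOut' is assumed to be zero-initialised before the launch.
__global__ void CompactVisible(const uint32_t* nodes, const uint8_t* visible,
                               uint32_t* packedOut, uint32_t* outCount, int n)
{
    __shared__ uint32_t scan[256];
    __shared__ uint32_t baseSlot;

    int gid = blockIdx.x * 256 + threadIdx.x;
    uint32_t keep = (gid < n && visible[gid]) ? 1u : 0u;
    scan[threadIdx.x] = keep;
    __syncthreads();

    // Hillis-Steele inclusive scan across the block, entirely in shared memory.
    for (int offset = 1; offset < 256; offset *= 2) {
        uint32_t v = (threadIdx.x >= (unsigned)offset) ? scan[threadIdx.x - offset] : 0u;
        __syncthreads();
        scan[threadIdx.x] += v;
        __syncthreads();
    }

    if (threadIdx.x == 255) {
        // Reserve space in the global output once per block.
        baseSlot = atomicAdd(outCount, scan[255]);
    }
    __syncthreads();

    if (keep) {
        uint32_t slot = baseSlot + scan[threadIdx.x] - 1; // exclusive position
        uint32_t idx  = nodes[gid] & 0xFFFFu;             // assume indices fit in 16 bits
        // Pack two indices per word; atomicOr keeps the neighbouring half intact.
        atomicOr(&packedOut[slot / 2], idx << (16u * (slot & 1u)));
    }
}
```

The scan and the bit packing are exactly the kind of integer-heavy, LDS-resident work the post is talking about; halving the footprint of intermediate data is often what keeps occupancy up in that 70-80% range.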
> It is AMD's first RT implementation, so I wouldn't be surprised.

AMD has been looking at this for a while. This is a paper from 2014 in which they propose modifying the ALUs for only a 4-8% area increase. They propose 4 traversal units per CU, a 1-for-1 match to the number of TMUs, which is exactly where the hardware resides in AMD's more recent ray tracing patents.
We propose a high performance, GPU integrated, hardware ray tracing system. We present and make use of a new analysis of ray traversal in axis aligned bounding volume hierarchies. This analysis enables compact traversal hardware through the use of reduced precision arithmetic. We also propose a new cache based technique for scheduling ray traversal. With the addition of our compact fixed function traversal unit and cache mechanism, we show that current GPU architectures are well suited for hardware accelerated ray tracing, requiring only small modifications to provide high performance. By making use of existing GPU resources we are able to keep all rays and scheduling traffic on chip and out of caches.
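The "reduced precision arithmetic" part is the key trick: child boxes can be stored with a handful of bits relative to their parent as long as the rounding is conservative (outward), so the cheap test may return false positives but can never miss a real hit. A rough illustration of that idea (the 8-bit encoding here is my own simplification for the example, not the paper's exact scheme):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Child AABB stored as 8-bit offsets inside its parent's box: 6 bytes instead
// of 24. Min is rounded down and max rounded up, so the quantised box can only
// grow, never shrink. Assumes the child is contained in the parent.
struct QuantizedBox { uint8_t lo[3], hi[3]; };

QuantizedBox Quantize(const float pMin[3], const float pMax[3],
                      const float cMin[3], const float cMax[3]) {
    QuantizedBox q;
    for (int a = 0; a < 3; ++a) {
        float scale = 255.0f / std::max(pMax[a] - pMin[a], 1e-20f);
        q.lo[a] = (uint8_t)std::clamp((int)std::floor((cMin[a] - pMin[a]) * scale), 0, 255);
        q.hi[a] = (uint8_t)std::clamp((int)std::ceil ((cMax[a] - pMin[a]) * scale), 0, 255);
    }
    return q;
}

// Conservative slab test against the dequantised (slightly enlarged) box.
// A false positive only costs an unnecessary descent; the full-precision
// primitive test at the leaves still decides the actual hit.
bool IntersectsConservative(const QuantizedBox& q,
                            const float pMin[3], const float pMax[3],
                            const float org[3], const float invDir[3], float tMax) {
    float tEnter = 0.0f, tExit = tMax;
    for (int a = 0; a < 3; ++a) {
        float step = (pMax[a] - pMin[a]) / 255.0f;
        float lo = pMin[a] + q.lo[a] * step;
        float hi = pMin[a] + q.hi[a] * step;
        float t0 = (lo - org[a]) * invDir[a];
        float t1 = (hi - org[a]) * invDir[a];
        tEnter = std::max(tEnter, std::min(t0, t1));
        tExit  = std::min(tExit,  std::max(t0, t1));
    }
    return tEnter <= tExit;
}
```

That conservativeness is what lets the traversal unit be tiny and still keep all the ray and scheduling traffic on chip, as the abstract claims.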
> These are proposed changes to Hawaii (R9 290X). With Navi's enhanced caches, I would think it's already better suited to the modifications described.

Reminds me of this paper, which also used this architecture as its example: https://pdfs.semanticscholar.org/26ef/909381d93060f626231fe7560a5636a947cd.pdf
> I bet that NV was 'looking at it for a while' before going into production with their RT.

Yeah, they have tons of RT experience, and more research / software experience than AMD in general.