AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Artistically this demo might be quite garish, but at least it does the job of showing off the kind of things only RT can do. In that regard, I think it was the most unambiguous demo of that so far. Like: yes, the whole fucking screen can be covered with reflective surfaces of both the specular and glossy variety, and things will look right. That means artists can throw whatever content at this and you will get proper reflections.

I'd rather get ugly but didactic demos than "what am I supposed to see here" ones like Metro's.
Wasn't this supposed to be shown at GDC (before it died, that is)?
 

Wait, whaaaat! Can someone please explain something to me? At the end of the video, there are two large curved surfaces on both sides of the robot. Why don't the reflections on the different segments align at all, when the surfaces themselves seem to align perfectly? That looks like cubemap reflections, not RT reflections at all. Fishy as hell!
 
... a nice dose of programmer porn. Although it's nothing new since Turing.

Seems AMD is interested in doing RT with simple shaders, preferably one single shader using DXR 1.1, as opposed to the many-shader approach of DXR 1.0; AMD says the single-shader path gives better performance, while Microsoft says both approaches have their uses, advantages and disadvantages.
I did not get all the details about the usage of inline tracing here, but I assume the proposed uber-shader approach is just the obvious optimization for reflections on any hardware.
Remembering the high cost of RT in BFV, I assume it's because they have many different material shaders, and hit shading ends up with very bad thread utilization because only a few rays of a wavefront land on the same material shader, so we get serialization instead of parallelization. I guess they had no time to make an uber shader for all their content, and that's why they traced less than one ray per pixel.
That's pure guessing, but even if a GPU has HW support to bin hits per material, an uber shader for most stuff should end up faster. Of course, an uber shader brings more complexity and branching than multiple simpler shaders, and constrains content creation.
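
To put a toy number on that serialization point: with per-material hit shaders, a wavefront pays for every distinct material its rays happen to hit, one shader after another, while an uber shader pays for a single (branchier) pass. A quick sketch of that cost model in Python; the wavefront width matches RDNA wave32, but the shader cost, overhead factor and material count are made-up illustration values, not measurements:

Code:
import random

WAVE = 32                # threads per wavefront (RDNA wave32)
MATERIAL_COST = 100      # made-up cost of one material hit shader
UBER_OVERHEAD = 1.5      # made-up branch/register pressure factor
MATERIALS = 8            # made-up count of distinct scene materials

random.seed(1)
hits = [random.randrange(MATERIALS) for _ in range(WAVE)]  # material per ray

# Many-shader path: the wavefront runs each distinct material shader
# back to back, with only the matching lanes active each time.
serialized_cost = len(set(hits)) * MATERIAL_COST

# Uber-shader path: one pass over all lanes, bigger and branchier.
uber_cost = MATERIAL_COST * UBER_OVERHEAD

print(serialized_cost, uber_cost)  # e.g. 800 vs 150.0 when all 8 materials show up

The gap obviously shrinks as rays get sorted or binned by material, which is exactly the HW-binning caveat above.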

But aside from that, I would not be surprised if NV and AMD have very different sweet spots for inline vs. shader-based RT.

Very interesting: SX supports tracing meshlets! :)

They must have messed up the upload or something, as the video file they posted for the AMD RDNA 2.0 RT demo is internally 24 fps.
The frame cadence alternates: 3-frame repeat, 2-frame repeat, 3-frame repeat, 2-frame repeat.
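
That cadence also pins down the internal framerate exactly: alternating 3-frame and 2-frame repeats means 2 unique frames per 5 container frames, i.e. 24 fps in a 60 fps file. Trivial to check in Python (numbers straight from the post above):

Code:
container_fps = 60    # the uploaded file is a 1080p60 video
cadence = [3, 2]      # observed repeats per unique frame, alternating

unique = len(cadence)    # 2 unique frames...
shown = sum(cadence)     # ...spread over 5 container frames
print(container_fps * unique / shown)   # -> 24.0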

uuh... let's hope AMD RT is not too 'cinematic' then.
 

Bunch of demos like that lately; the footage from the Minecraft path-tracing demo is also screwed up because they had an unlocked framerate, so skip skip skip.

I'm not sure these demos were necessarily meant to be recorded, but the virus suddenly made it necessary, so a bunch of people without good demo-recording knowledge (v-sync! keep your frame pacing!) suddenly had to record all this. Just a guess for this RT demo, though.
 
Yeah, all this stuff was definitely made for GDC, which went kaput.
At least they showed thingies.
 
Is that AMD RT video serious? It's not some subtle and knowing nod to the thread-of-gold on this site about chrome-plated T-Rexes?

Was the art director 12 years old or something?
I'm pretty sure they don't have an art director and instead just let the coders put their vision on screen, no matter how horrible it is.
 
I'm pretty sure there were two directives:
  1. Show off-screen stuff in reflections.
  2. Make everything shiny and reflective. That SpongeBob episode set in the future where everything is chrome? That's what we want.
 

Plus or minus a few frames on that cadence ... Is the demo running on an RDNA 2.0 console or a GPU?
We downloaded the video and slowly clicked through it, counting a framerate of around 26 FPS, with most frames (on the 1080p60 file) being repeated two or three times, sometimes more. That explains the stutter, but we're not sure whether it's because of the demo being very complex (AMD provided a lengthy list of all the ray tracing graphics effects being used), or if the hardware is running slower than we'll see in final GPUs, or perhaps YouTube is throttling image quality to preserve internet bandwidth during the current Coronavirus outbreak.

One thing we have learned from Nvidia's attempts to promote ray tracing over the past 18 months: It's very hard to come up with a 'must have' use case for the technology that doesn't tank performance on lesser GPUs.
https://www.tomshardware.com/news/amd-directx-raytracing-demo-rdna-2-gpu
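
Tom's counting method generalizes easily: once you've stepped through the file and noted how often each unique frame repeats, the average framerate falls out directly. A small helper in Python; the repeat counts below are made up purely to show how "mostly twos and threes, sometimes more" lands in the mid-20s:

Code:
def effective_fps(repeats, container_fps=60):
    # repeats: how many container frames each unique frame occupies
    return container_fps * len(repeats) / sum(repeats)

print(effective_fps([2, 3, 2, 2, 3, 2, 3, 2, 2, 2]))  # ~26.1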
 
A slide I thought was pertinent:

[slide image]



AMD is rumored to be claiming an "IPC" increase as well.

[slide image]
 
The Xbox Series X intersection engine (RDNA2) can do 380 billion intersections per second. So how many rays per second can it do?
 

Just another "big number" that GPU vendors throw around. Gigarays a second, verts a second, whatever. Deducing any real-world performance from a "big number" is always just gut feeling; it's not useful beyond "sounding big!".
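
For what it's worth, the 380 billion figure decomposes plausibly as 52 CUs x 4 box/triangle tests per CU per clock x 1.825 GHz, and turning intersections/s into rays/s only needs an assumption about how many tests an average ray burns traversing the BVH. A back-of-envelope in Python; the per-ray test counts are guesses for illustration, not anything AMD or Microsoft has stated:

Code:
cus, clock_ghz, tests_per_cu_clk = 52, 1.825, 4
peak_tests = cus * tests_per_cu_clk * clock_ghz * 1e9
print(round(peak_tests / 1e9))   # -> 380 G intersections/s

# Guess: ~24 box tests + ~6 triangle tests per ray on average.
tests_per_ray = 24 + 6
print(round(peak_tests / tests_per_ray / 1e9, 1))   # ~12.7 Grays/s

Which just restates the point: the rays/s you actually get depends entirely on the scene and BVH depth, so the headline number tells you little on its own.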
 

It means something if both GPUs are essentially the same architecture. It's like saying a GTX 1080 isn't necessarily better than a GTX 1070 just because it has bigger numbers. I do think games will scale very well between the two GPUs, and with diminishing returns on resolution, the differences will be harder to spot than ever.
 