Likely a human perception effect. People get so used to faked scenes that look good that they have no point of comparison with something significantly more accurate. They don't see a big difference because they're used to seeing the fake. In reverse, if you're used to seeing a correct image and then everything is taken away, it's much more noticeable.
TL;DR: it's sometimes harder to see additions than it is to see removals.
This is what you wanted:

I thought the way they were demoing RTX was complete shit until they showed Battlefield 5. I mean, with the Metro demo where they were showing GI, it would have been a lot more interesting to show it with the light moving around, with the window opening in the wall moving around, or with window openings being added and removed. The whole point of real-time GI is for it to be dynamic; otherwise it's something games fake pretty well right now. The Battlefield demo was the best because it was dynamic, and reflections are a very easy selling point because reflections are faked poorly.
Lol yea. Once you are used to a standard, it's hard to go down. Easy to go up.

Oh, I could see the differences. It didn't really change the mood of the scene or take me out of the experience. However, you're completely correct about the reverse. Once these things become commonplace, we won't want to go back.
TSMC 12nm; the leaps they're advertising are just the addition of new types of specialized cores, and they use examples where those cores are actually utilized.

What process are these chips using? These leaps suggest 7nm?
I'm wondering if it would be possible to have the rays collect information about whatever they hit (material index, world position, normal, etc.), organize that data by material, and then shade via compute shaders instead. That way you wouldn't trash the cache every time you trace a ray.

Oh, btw, for anyone wondering about the Metro Exodus demo... yeah, this is what I was saying about being shading bound. Basically, GPUs rely on shading everything similarly, because they only have to shade things that are very close together: that rock in a forest is close to that other rock when you look at it in game, so the shading (what lights hit it, etc.) is going to be very similar for each rock. So you can group your shading hardware together as well, making it fast and cheap.
Thing is, what ray tracing is good for is shooting rays off in random directions. You then shade whatever the ray "hits", but those random directions aren't close together, meaning they aren't shaded similarly, meaning GPUs aren't really good at this part. For global illumination, quite frankly the biggest (added) effect I can think of that RT will be great at, the farther the rays go, the less similar they are. So you can do very short-range, kind of meh GI like in Metro Exodus, and/or very neat, shiny short-range reflections like in Battlefield (they're shiny because shiny reflections all go in the same direction).
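To make that coherence point concrete, here's a toy CUDA sketch (my own illustration, not from the thread): for pixels looking at a flat, shiny floor, the mirror-reflection directions come out nearly identical from pixel to pixel, while cosine-weighted diffuse GI directions scatter over the whole hemisphere. The helper names and the per-pixel hash are purely illustrative, not any engine's API.

```cuda
// ray_coherence.cu -- toy sketch: coherent reflection rays vs divergent GI rays.
// Build: nvcc -arch=sm_70 ray_coherence.cu -o ray_coherence
#include <cuda_runtime.h>
#include <cstdio>
#include <math.h>

__device__ float3 v3(float x, float y, float z) { return make_float3(x, y, z); }
__device__ float3 vadd(float3 a, float3 b) { return v3(a.x + b.x, a.y + b.y, a.z + b.z); }
__device__ float3 vmul(float3 a, float s)  { return v3(a.x * s, a.y * s, a.z * s); }
__device__ float  vdot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
__device__ float3 vnorm(float3 a)          { return vmul(a, rsqrtf(vdot(a, a))); }

// Mirror reflection r = v - 2(v.n)n: on a flat shiny floor every pixel's
// reflection ray points almost the same way -> coherent hits, coherent shading.
__device__ float3 reflectDir(float3 v, float3 n) {
    return vadd(v, vmul(n, -2.0f * vdot(v, n)));
}

// Cosine-weighted hemisphere sample around n: each pixel's GI ray goes its own
// way -> hits unrelated geometry and materials -> divergent shading.
__device__ float3 giDir(float3 n, float u1, float u2) {
    float r = sqrtf(u1), phi = 6.2831853f * u2;
    float3 t = fabsf(n.x) < 0.9f ? v3(1.f, 0.f, 0.f) : v3(0.f, 1.f, 0.f);
    float3 b = vnorm(v3(n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x));
    t = v3(b.y * n.z - b.z * n.y, b.z * n.x - b.x * n.z, b.x * n.y - b.y * n.x);
    return vadd(vadd(vmul(t, r * cosf(phi)), vmul(b, r * sinf(phi))),
                vmul(n, sqrtf(fmaxf(0.f, 1.f - u1))));
}

__global__ void genRays(float3* refl, float3* gi, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 normal = v3(0.f, 1.f, 0.f);                 // flat floor
    float3 view   = vnorm(v3(0.001f * i, -1.f, 1.f)); // near-identical primary rays
    refl[i] = reflectDir(view, normal);
    // Cheap per-pixel hash standing in for a real sampler.
    float u1 = fabsf(sinf(i * 12.9898f)); u1 -= floorf(u1);
    float u2 = fabsf(sinf(i * 78.2330f)); u2 -= floorf(u2);
    gi[i] = giDir(normal, u1, u2);
}

int main() {
    const int N = 8;
    float3 *dR, *dG, hR[N], hG[N];
    cudaMalloc(&dR, N * sizeof(float3));
    cudaMalloc(&dG, N * sizeof(float3));
    genRays<<<1, N>>>(dR, dG, N);
    cudaMemcpy(hR, dR, N * sizeof(float3), cudaMemcpyDeviceToHost);
    cudaMemcpy(hG, dG, N * sizeof(float3), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; ++i)
        printf("pixel %d  refl(%+.2f %+.2f %+.2f)  gi(%+.2f %+.2f %+.2f)\n",
               i, hR[i].x, hR[i].y, hR[i].z, hG[i].x, hG[i].y, hG[i].z);
    cudaFree(dR); cudaFree(dG);
    return 0;
}
```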
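And here's a rough sketch of the "collect hit info, organize by material, shade in a compute pass" idea suggested above, using Thrust's sort_by_key to make equal-material hits contiguous before shading. HitRecord, shadeSorted, and the faked trace results are stand-ins of my own, not DXR's or any engine's API.

```cuda
// hit_binning.cu -- sketch of deferred, material-sorted shading of ray hits.
// Build: nvcc -arch=sm_70 hit_binning.cu -o hit_binning
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

struct HitRecord {        // what each ray writes out instead of shading immediately
    float3 position;
    float3 normal;
    int    materialId;
};

// Deferred shading pass: because records are sorted by material, threads in the
// same warp mostly run the same material's code path and touch the same
// textures/constants, instead of thrashing the cache with random materials.
__global__ void shadeSorted(const HitRecord* hits, float3* radiance, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    HitRecord h = hits[i];
    float ndl  = fmaxf(0.f, h.normal.y);              // stand-in light straight overhead
    float tint = 0.1f + 0.01f * (h.materialId % 32);  // stand-in "material evaluation"
    radiance[i] = make_float3(tint * ndl, tint * ndl, tint * ndl);
}

int main() {
    const int numHits = 1 << 16;

    // Pretend these came back from a trace pass: incoherent rays hit random materials.
    thrust::host_vector<int> h_matIds(numHits);
    thrust::host_vector<HitRecord> h_hits(numHits);
    for (int i = 0; i < numHits; ++i) {
        int m = rand() % 128;
        h_matIds[i] = m;
        h_hits[i] = HitRecord{ make_float3(0.f, 0.f, 0.f), make_float3(0.f, 1.f, 0.f), m };
    }
    thrust::device_vector<int> d_matIds = h_matIds;
    thrust::device_vector<HitRecord> d_hits = h_hits;

    // The key step: reorder hit records so equal materials end up contiguous.
    thrust::sort_by_key(d_matIds.begin(), d_matIds.end(), d_hits.begin());

    thrust::device_vector<float3> d_radiance(numHits);
    int threads = 256, blocks = (numHits + threads - 1) / threads;
    shadeSorted<<<blocks, threads>>>(thrust::raw_pointer_cast(d_hits.data()),
                                     thrust::raw_pointer_cast(d_radiance.data()),
                                     numHits);
    cudaDeviceSynchronize();
    printf("shaded %d hits in material-sorted order\n", numHits);
    return 0;
}
```

The sort is the whole trick: after it, a warp of shading threads mostly evaluates one material at a time, which is the coherence the comment above says GPUs depend on. Whether the extra sort pass pays for itself is exactly the kind of thing that would need profiling.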
Can those Tensor cores be used in a similar fashion to AMD's Rapid Packed Math? It seems like a waste of potential if the Tensor cores are just being used for denoising.
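For anyone weighing that comparison: Rapid Packed Math is essentially packed FP16 arithmetic (two half-precision operations per 32-bit lane), whereas the Tensor cores execute small warp-wide matrix multiply-accumulates, which is why they map to neural-net style work like denoising rather than general shader math. A minimal CUDA sketch of the two, assuming an sm_70+ GPU; both kernels are my own illustration, not tied to any game workload.

```cuda
// packed_vs_tensor.cu -- illustrative contrast between packed FP16 math (the
// closest CUDA analogue to Rapid Packed Math) and a tensor-core matrix op.
// Build: nvcc -arch=sm_70 packed_vs_tensor.cu -o packed_vs_tensor
#include <cuda_fp16.h>
#include <mma.h>
#include <cuda_runtime.h>
#include <cstdio>

// (1) Packed FP16: one instruction performs two half-precision FMAs per lane.
// (Shown for contrast; not launched below.)
__global__ void packedFma(const __half2* a, const __half2* b, __half2* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __hfma2(a[i], b[i], out[i]);
}

// (2) Tensor cores: a whole warp cooperates to compute D = A*B + C on a
// 16x16x16 tile. Great for matrix-shaped work, not a drop-in replacement for
// general packed arithmetic.
__global__ void tensorTileMma(const half* A, const half* B, float* C) {
    using namespace nvcuda;
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> aFrag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> bFrag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> cFrag;
    wmma::fill_fragment(cFrag, 0.0f);
    wmma::load_matrix_sync(aFrag, A, 16);   // leading dimension 16
    wmma::load_matrix_sync(bFrag, B, 16);
    wmma::mma_sync(cFrag, aFrag, bFrag, cFrag);
    wmma::store_matrix_sync(C, cFrag, 16, wmma::mem_row_major);
}

int main() {
    // Tiny smoke test for the tensor-core path: identity * ones = ones.
    half hA[256], hB[256];
    float hC[256];
    for (int i = 0; i < 256; ++i) {
        hA[i] = __float2half(i % 16 == i / 16 ? 1.f : 0.f);
        hB[i] = __float2half(1.f);
    }
    half *dA, *dB; float *dC;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);
    tensorTileMma<<<1, 32>>>(dA, dB, dC);   // one warp drives one WMMA tile
    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    printf("C[0][0] = %f (expect 1.0)\n", hC[0]);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```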