How would reflections be free?
> How would reflections be free?

Disable shading (diffuse only) and get reflections for free (see the Quake RTX path tracing demo).
If AMD half-asses RT, then they will surrender both the IQ AND the performance advantage to NVIDIA.
> You don't know that with next-gen compute. I guess it all depends on what people mean by 'ray tracing hardware'. It is a pretty silly argument over whether consoles have a hardware feature or not when no-one can even say what exactly that hardware is.

Do we know what sort of computational power and memory resources are needed to do RTRT? I ask because Sony worked on async compute for the PS4; if (and it's a big if) RTRT could be carved out of the compute budget sans fixed hardware, we could see the consoles opt to hit performance above that metric and allow developers to choose which features to implement: RTRT and upscaled 1080p at 30 fps, or 4K at 60 fps, or some other optimal solution (many more fps) for VR, for example.
> Is 70fps in Minecraft at 1440p top-notch performance? It is 2 bounces, vs 1 for Metro. It is just a demo, vs a production implementation, so I'm not sure it would really be fair to compare performance anyway. It does look very nice, and I do like his idea of storing the average colour for a particular texture for GI purposes. Instead of actually sampling a point on a texture for colour information, he can just use the same average sample for the whole texture. There must be some performance improvements there, and it actually might lead to a less noisy and "better" look when you're dealing with 1 or less than 1 ray per pixel.

Tracing unit-sized cubes must be incredibly fast, as it's basically the classic Wolfenstein 3D raycasting DDA extended to a third dimension.
Edit: It's a GTX 1080 and a very simple map even by Minecraft standards. It would be interesting to see how the BVH affects performance as the world gets bigger. Geometric complexity here is pretty simple, but the GTX 1080 isn't exactly a ray tracing GPU, even by raw compute standards.
Also, Minecraft is written in Java ... wtf ... so I imagine there's some lost performance there.
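For readers who haven't written one: the "Wolfenstein-style raycasting DDA with a third dimension" mentioned a couple of posts up is essentially the Amanatides & Woo grid traversal. Below is a minimal C++ sketch of that idea, illustrative only and not the demo's code; `solid()` stands in for a hypothetical voxel-world lookup.

```cpp
// Minimal 3D DDA voxel traversal in the Amanatides & Woo style, i.e. the
// "Wolfenstein raycasting with a third dimension" idea. Illustrative only,
// not the Minecraft demo's code; solid() is a hypothetical world lookup.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical voxel query: here, everything at y <= 0 is "ground".
static bool solid(int x, int y, int z) { return y <= 0; }

// Walk the ray o + t*d through unit cells until a solid voxel is hit or
// t exceeds maxT. Assumes no direction component is exactly zero
// (handle that with a small epsilon in practice).
static bool traceVoxels(Vec3 o, Vec3 d, float maxT, int& hx, int& hy, int& hz)
{
    int x = (int)std::floor(o.x), y = (int)std::floor(o.y), z = (int)std::floor(o.z);
    int stepX = d.x > 0 ? 1 : -1, stepY = d.y > 0 ? 1 : -1, stepZ = d.z > 0 ? 1 : -1;

    // Parametric distance needed to cross one whole cell along each axis.
    float tDeltaX = std::fabs(1.0f / d.x);
    float tDeltaY = std::fabs(1.0f / d.y);
    float tDeltaZ = std::fabs(1.0f / d.z);

    // Parametric distance to the first cell boundary along each axis.
    float tMaxX = (d.x > 0 ? (x + 1 - o.x) : (o.x - x)) * tDeltaX;
    float tMaxY = (d.y > 0 ? (y + 1 - o.y) : (o.y - y)) * tDeltaY;
    float tMaxZ = (d.z > 0 ? (z + 1 - o.z) : (o.z - z)) * tDeltaZ;

    float t = 0.0f;
    while (t < maxT) {
        if (solid(x, y, z)) { hx = x; hy = y; hz = z; return true; }
        // Step into whichever neighbouring cell the ray enters first.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; t = tMaxX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { y += stepY; t = tMaxY; tMaxY += tDeltaY; }
        else                                { z += stepZ; t = tMaxZ; tMaxZ += tDeltaZ; }
    }
    return false;
}

int main()
{
    int hx, hy, hz;
    // A ray starting above the ground and angled downwards.
    if (traceVoxels({0.5f, 5.5f, 0.5f}, {0.3f, -1.0f, 0.2f}, 100.0f, hx, hy, hz))
        std::printf("hit voxel (%d, %d, %d)\n", hx, hy, hz);
    return 0;
}
```

The inner loop is just a few compares and adds per voxel crossed, which is why marching a uniform grid of unit cubes can be so much cheaper than a full triangle BVH traversal.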
> Do we know what sort of computational power and memory resources are needed to do RTRT?

We don't know. I've never seen a serious approach to do RT in games yet. Things like OptiX or Radeon Rays don't tell us anything, and neither do low-poly Quake or Minecraft.
> We don't know. I've never seen a serious approach to do RT in games yet. Things like OptiX or Radeon Rays don't tell us anything, and neither do low-poly Quake or Minecraft.

This is why I favor putting resources in the hands of developers and letting teams determine how to allocate the compute and bandwidth. These new memory speeds create additional possibilities too; the relatively slow memory we've had till now could be the equivalent of putting a quarter in a vending machine when we need a dollar to get anything back out.
Personally, I do RT in compute for GI, and from this I'm sure it's doable, but I use a hierarchy of discs instead of triangles; also, for diffuse GI, LOD can be used very aggressively.
I'm very optimistic and will try to use this for sharp specular reflections as well ASAP. Whatever performance I get, the same would be possible on next-gen consoles using triangles too, because with fp16 the instruction count and register usage would be almost the same.
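For context, intersecting a ray with a disc primitive is only a plane test plus a radius check. Here is a minimal sketch, assuming discs stored as centre/normal/radius as in typical surfel-style GI; it is illustrative only, not this poster's actual implementation.

```cpp
// Minimal ray vs. disc intersection, assuming discs stored as
// centre / normal / radius (a typical surfel-style GI representation).
// Illustrative only, not the poster's implementation.
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Disc { Vec3 center, normal; float radius; };

// Returns true and the hit distance t if the ray (o, d) hits the disc.
static bool intersectDisc(const Vec3& o, const Vec3& d, const Disc& disc, float& t)
{
    float denom = dot(d, disc.normal);
    if (std::fabs(denom) < 1e-6f) return false;      // ray parallel to the disc's plane
    t = dot(disc.center - o, disc.normal) / denom;   // hit distance on the plane
    if (t < 0.0f) return false;                      // plane is behind the ray origin
    Vec3 p = o + d * t;
    Vec3 v = p - disc.center;
    return dot(v, v) <= disc.radius * disc.radius;   // inside the disc's radius?
}
```

A hierarchy over such discs can then be traversed much like a triangle BVH, with this test replacing the triangle intersection.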
I think in the end it is just a question of detail. If next gen has no RT hardware, specular reflections might end up less sharp than with RTX, or shadows will be less exact, but for GI there would be no visible difference.
Having no RT hardware could even lead to better results, because devs would be forced to solve the problems with better algorithms instead of with raw but restricted brute-force power.
Additionally, if Volta can do RT fast enough, then next gen can do it too. And doing RT at half res is enough anyways.
Who knows - I guess next-gen games using RT are in the works, and we might see things at the next-gen announcement that end up much more impressive than anything we've seen on PC so far. But even if not, it's just a matter of time.
> They surrendered that to NV many years ago. RT doesn't do anything to change that.

Only in select visual fields, like shadows (VXAO, HFTS) and some AA or WaveWorks effects. RT changes that by introducing a massive sweeping change that involves the rest of the visual spectrum: reflections, lighting, materials, more shadows, etc.
> Because the vast majority of games aren't sold on PC, they are sold on consoles.

I disagree. PC sales are now equal to or sometimes surpassing XO sales; they've become an integral part of overall game sales. Now EVERY game is released on PC except first-party Sony exclusives, because the porting process is easier now and the sales are good.
> As a PC user, if we're lucky we'll get some little graphical add-ons for PC that generally aren't terribly efficient and don't dramatically change how games look.

That's not entirely true either; most AAA titles now have much higher fidelity graphics than even the One X. Just look at how Metro and Battlefield V on PC compare to the One X, which usually lacks SSR, tessellation, draw distance, a myriad of particle effects, etc.
> ...but for the most part it doesn't look significantly far ahead of any other game you can play on PC.

I urge you to watch the DF analysis of the tech; it might change your mind there.
> NV will maintain their lead in the PC space in that case, but no game made will fail to run on AMD hardware.

Of course not. Every game will run; no one is doubting that. However, AMD sales on PC will suffer because of it: they will be seen as the lower-performing, lower-IQ option, which will degrade their presence on PC even further. That's not good for AMD, or even for the industry. It will also further impact their presence in consoles in subsequent cycles.
> Not sure if this has been noticed yet, but RTX is doing a piss-poor job on human skin sometimes; it looks dramatically worse than rasterization here.

RT doesn't do the rendering here. It is a hybrid solution, and the lighting conditions and models are more or less optimized for normal rasterization techniques. This can result in a very different scene.
> Not sure if this has been noticed yet, but RTX is doing a piss-poor job on human skin sometimes; it looks dramatically worse than rasterization here.

It's the effect of indirect RT AO; contact shadows increase a lot more. You need to watch this in motion, not in screenshots. Skin renders normally with RT outside of shadowed areas.
> It's the effect of indirect RT AO; contact shadows increase a lot more. You need to watch this in motion, not in screenshots. Skin renders normally with RT outside of shadowed areas.

Even in motion, the old dude especially just looks wrong and overly darkened at times. But yeah, outside of the shadowed areas it's quite fine.
> Even in motion, the old dude especially just looks wrong and overly darkened at times. But yeah, outside of the shadowed areas it's quite fine.

Am I very strange if I think that maybe the issue is the hair (it looks like it's lit with ambient light at value 1) and not the skin? And I don't see where the WTF is in the girl.
> Am I very strange if I think that maybe the issue is the hair...

It seems they do RT AO, which cancels out lighting from the probes so the faces become dark, and after that they render the hair without AO.
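If that reading is right, the mismatch would look roughly like the following illustrative C++. This is an assumption about the renderer rather than its actual shaders; `probeIrradiance` and `rtAO` are made-up names for the probe lighting and the ray traced occlusion term.

```cpp
// Purely illustrative C++ of the behaviour described above (not the game's
// actual shader code): a ray traced AO term attenuates the probe-based
// ambient on skin, while hair is shaded without that term.
struct Vec3 { float x, y, z; };
static Vec3 operator*(const Vec3& a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 operator*(const Vec3& a, const Vec3& b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

// probeIrradiance: ambient lighting fetched from light probes.
// rtAO: 0..1 occlusion from ray traced ambient occlusion.
static Vec3 ambientSkin(Vec3 albedo, Vec3 probeIrradiance, float rtAO)
{
    // AO multiplies the probe term, so heavily occluded skin
    // (eye sockets, under the chin) ends up much darker than before.
    return albedo * (probeIrradiance * rtAO);
}

static Vec3 ambientHair(Vec3 albedo, Vec3 probeIrradiance)
{
    // Without the AO term the hair keeps the full probe contribution,
    // so it can look noticeably brighter than the skin next to it.
    return albedo * probeIrradiance;
}
```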
> Additionally, if Volta can do RT fast enough, then next gen can do it too.
>
> Volta being much slower than even a 2060, you better hope the console packs more power than that 3000-dollar compute monster; it has much more compute power than even a 2080 Ti.

Volta is a 2017 GPU that wasn't designed with raytracing in mind. The fact that it can still raytrace effectively shows that there's potential for compute to be used for raytracing, and it may well be possible to just augment compute to make it that little bit more effective. That's exactly how compute came to be: people found they could use pixel and vertex shaders to run non-graphics workloads, and the GPU was then modified to enable those workloads more effectively. People are currently finding they can raytrace on compute...