Next Generation Hardware Speculation with a Technical Spin [2018]

I don't see it at all right now under the $400 mark. Embrace that! :rolleyes:
You will, even without RT-specific hardware.

After seeing RDR2, especially on XB1-X, we can survive another generation without RT. However, if the Cyberpunk 2077 PC version totally massacres the next-generation console editions with gobs of gorgeous RT lighting, shading, and reflections... then I will find a local wormhole, hop in, and beat up my past self as he (I) is writing this post. :yep2:
If not RT, then we at least get SVOGI and contact-hardening shadows as standard features. Though, considering the growing support of RT in middleware engines...

Have you a link to these engines being pushed on GTX 1080 hardware? I can only find examples running on mid-range and older cards (4 TF).
The fact no-one's got a demo of them doesn't show the quality is inferior. The existing demos show great quality, and show the quality can be ramped up versus performance. Ergo, until we see a demo with a large rectangular light source and vertical rails or similar comparable set-up on a GTX 1080 to compare the shadowing versus the RT example, nothing is proven.

If your statement is fact, please back it with supporting evidence. If it's just your opinion, please qualify it as such with a prefix like, "I expect..." or "I would assume..."

That's a feature totally independent of upscaling. You can do that whether you upscale or not. When it comes to turning 1080p pixels into 4K pixels, reducing the number of pixel samples that need to be drawn, all rendering methods can use ML-based solutions or algorithmic solutions.
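
To put rough numbers on that independence (a toy calculation, not tied to any particular reconstruction technique): rendering at 1080p and reconstructing to 4K shades a quarter of the pixels per frame, and that saving applies whether the frame was rasterised or ray traced.

```cpp
#include <cstdio>

int main() {
    // Shaded samples per frame at native 1080p vs native 4K.
    const long long samples1080p = 1920LL * 1080;  // 2,073,600
    const long long samples4k    = 3840LL * 2160;  // 8,294,400

    // Whatever the rendering method, drawing at 1080p and upscaling to 4K
    // only shades a quarter of the pixels; the reconstruction step (ML or
    // algorithmic) then fills in the rest.
    printf("1080p samples per frame: %lld\n", samples1080p);
    printf("4K samples per frame:    %lld\n", samples4k);
    printf("fraction shaded when upscaling: %.2f\n",
           (double)samples1080p / (double)samples4k);  // 0.25
}
```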

Again, we're trying to have an actual discussion here. If all you want to do is blow RT's trumpet and say it's better at everything, you're just generating noise.
1) You can look at Kingdom Come: Deliverance benchmarks for a real-world example.

2) I think it does. If it's so great, why does nobody use it (except for CryEngine, but then again pretty much nobody uses CryEngine)? At the beginning of the gen there was some excitement about it and then nothing.

If you think voxel cone tracing is comparable to RTRT, show some data to prove it.

3) Getting better IQ on the 1080p image means the upscaled results will be better as well. Better input, better output.

4) The growing industry support of RT is an important factor of the discussion.

I think none of us have appreciated how far some lighting solutions have come. The VXGI stuff is several years old and DX11 based, yet it includes soft specular reflections. It can't do RT's perfect mirroring, but overall the solution provides realtime GI with ambient occlusion, soft shadowing, and soft reflections. Yet no-one here knew about it. ;) A game designed for XB1X using this technique would be spectacular, and if it weren't for the inclusion of RTX in nVidia's latest pro-focussed GPUs, we'd be talking about these different solutions with a unified view on their adoption and game engines' short-term future. We'd be looking at BFV showing cone-traced specular highlights that run on all GPUs instead of RTX-specific ray-traced reflections.
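
For anyone who hasn't looked at how cone tracing gets those soft reflections and GI, here's a rough, hypothetical CPU-side sketch of the core idea (not NVIDIA's actual VXGI code; all names are made up): the scene is voxelised into a mipmapped volume, and a cone is approximated by marching a ray while sampling progressively coarser mip levels as the cone widens. A narrow cone gives sharper reflections, a wide one gives diffuse GI, and a true mirror would need a zero-width cone, i.e. an actual ray.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };  // rgb = radiance, a = occlusion/opacity

// Placeholder: a real implementation samples a mipmapped 3D texture or a
// sparse voxel octree that was rebuilt/updated from the scene this frame.
Vec4 sampleVoxelVolume(const Vec3& /*pos*/, float /*lod*/) {
    return {0.2f, 0.2f, 0.2f, 0.05f};
}

// One cone: accumulate radiance front-to-back until the cone is occluded.
Vec3 coneTrace(Vec3 origin, Vec3 dir, float coneAperture, float maxDist)
{
    Vec3  radiance{0, 0, 0};
    float occlusion = 0.0f;
    float dist = 0.1f;                        // start just off the surface

    while (dist < maxDist && occlusion < 1.0f) {
        // Cone radius grows with distance; a wider cone samples a coarser mip.
        float radius = coneAperture * dist;
        float lod = std::log2(std::max(1.0f, 2.0f * radius /* / voxelSize */));

        Vec3 p{origin.x + dir.x * dist,
               origin.y + dir.y * dist,
               origin.z + dir.z * dist};
        Vec4 v = sampleVoxelVolume(p, lod);

        // Front-to-back compositing: later samples are weighted by whatever
        // the earlier samples haven't already blocked.
        float weight = (1.0f - occlusion) * v.a;
        radiance.x += weight * v.r;
        radiance.y += weight * v.g;
        radiance.z += weight * v.b;
        occlusion  += weight;

        dist += std::max(radius, 0.01f);      // step size scales with cone width
    }
    return radiance;  // soft GI / glossy reflection contribution for this cone
}
```

A pixel typically fires only a handful of these cones (a few for diffuse GI, one along the reflection vector for specular), which is why the cost stays bounded but mirror-sharp results are out of reach.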

This is where console tech wants to be versatile, to enable acceleration of different solutions. If the RT hardware can be used to accelerate volume traversal and perform alternative shape tracing, its inclusion is more valuable.
Versatile but inferior solutions. You yourself admitted right there that the quality is not comparable to RTRT. Would people be as excited if the BFV reflections were all blurry? Also I remind you that DXR can run on compute. Also, did we really see any breakthroughs in terms of rendering algorithms this gen? Things got better but nothing fundamentally different from what we started with for AAA games.
 
I don't think anyone has discounted today's lighting solutions. But they are either baked, or have a static form of GI. There are very few games that have global dynamic GI and those that do, still have hard limitations.
That's not true of VXGI. The demos I've linked to, dating back three years, are fully dynamic.

The idea that all voxel-based GI is equal in quality or performance is false.
No-one said that. In quality, it's inferior. However, in performance it's superior. How can it not be when a 4 TF GPU can run a dynamic GI solution in realtime but not a raytraced solution in realtime?

LPV GI worked on an Xbox One with Fable Legends. The one in CryEngine only works for one large light source. That's hardly the same as multiple light sources in a scene, all of them contributing GI everywhere.
Have you not looked at the demos I linked to? They show multiple light sources with specular highlights traced in realtime, all casting shadows.

From 11 minutes in, you see it being updated in realtime with dynamic objects, on the old version. nVidia haven't posted an updated video with the latest VXGI (perhaps because they have something else they want to be talking about? ;))...
http://on-demand.gputechconf.com/gtc/2015/video/S5670.html

Again, I don't think any of us realised this was possible, and our understanding about realtime global lighting is lacking. There's a decent question, "if this is possible, why aren't games using it?" but the obvious answer there is, "it's too demanding for current lowest-common-denominator game targets." If you need 4TFs minimum to use VXGI well and your game is targeting <2 TF consoles, you aren't going to look at this option, which means very little this generation would be using it. Just as we're not looking at games running on consoles to see what raytracing can do, we shouldn't be looking at games running on consoles to see what voxelised lighting and other solutions can do. We should be looking at the state of the art and the potential.

I'm pretty sure if RT hardware didn't exist in nVidia's latest GPUs, we'd be having a far fairer discussion about the possibilities of GI solutions. ;)
 
2) I think it does. If it's so great, why does nobody use it
If raytracing's so great, why doesn't everyone use it? Because the current GPUs aren't powerful enough, same answer for both solutions. As I say to Iroboto above, who is going to look into using realtime GI solutions that require 4 TFs minimum if your base target spec is <2 TF consoles? Makes no sense. It's a tech that's not suited to current gen consoles. So just as we don't look at current gen games and say, "if raytracing is so great, why aren't any games using it?" we shouldn't look at current-gen games and question the value of VXGI or other next-gen solutions.

If you think voxel cone tracing is comparable to RTRT, show some data to prove it.
I've linked to several realtime videos running on 4 TF GPUs that show multiple area light sources with shadows and specular highlights.

Versatile but inferior solutions. You yourself admitted right there that the quality is not comparable to RTRT. Would people be as excited if the BFV reflections were all blurry? Also I remind you that DXR can run on compute.
But not fast enough to be useful, whereas voxelised solutions can be. A game that uses DXR traced reflections will look its best on an RTX 2070 or above, but on lower-tier and older hardware it will fall back to having no reflections at all. A game using VXGI won't look as good on the RTX GPUs, but will look far better on the 1060s and above in this world.

Also, did we really see any breakthroughs in terms of rendering algorithms this gen? Things got better but nothing fundamentally different from what we started with for AAA games.
What do you count as a breakthrough? Not image reconstruction? Advanced temporal AA solutions rendering aliasing almost a thing of the past? What were the breakthrough rendering algorithms of the PS2 and PS3 generations?
 
That's not true of VXGI. The demos I've linked to, dating back three years, are fully dynamic.

No-one said that. In quality, it's inferior. However, in performance it's superior. How can it not be when a 4 TF GPU can run a dynamic GI solution in realtime but not a raytraced solution in realtime?

Have you not looked at the demos I linked to? They show multiple light sources with specular highlights traced in realtime, all casting shadows.

From 11 minutes in, you see it being updated in realtime with dynamic objects, on the old version. nVidia haven't posted an updated video with the latest VXGI (perhaps because they have something else they want to be talking about? ;))...
http://on-demand.gputechconf.com/gtc/2015/video/S5670.html

Again, I don't think any of us realised this was possible, and our understanding about realtime global lighting is lacking. There's a decent question, "if this is possible, why aren't games using it?" but the obvious answer there is, "it's too demanding for current lowest-common-denominator game targets." If you need 4TFs minimum to use VXGI well and your game is targeting <2 TF consoles, you aren't going to look at this option, which means very little this generation would be using it. Just as we're not looking at games running on consoles to see what raytracing can do, we shouldn't be looking at games running on consoles to see what voxelised lighting and other solutions can do. We should be looking at the state of the art and the potential.

I'm pretty sure if RT hardware didn't exist in nVidia's latest GPUs, we'd be having a far fairer discussion about the possibilities of GI solutions. ;)
I've ignored them because they aren't recent enough. As in I wanted to give you the benefit of the doubt and find something even newer to really showcase the efficiency of modern VXGI.

My post here:
https://forum.beyond3d.com/posts/2046655/

The edit at the bottom uses VXGI 2.0, released this year in April 2018. That's the performance of VXGI with the latest and greatest hardware features prior to RTX.
The cards they are using are all way above 4 TF of power.

Watching the performance of the VXGI 2.0 demos as well as the HFTS+ demos (all exclusive to nvidia) made me realize there's just not enough juice available even at the 12+ TF range, which is where most people are expecting our next gen consoles to land.
 
I couldn't find anything newer. That example is interesting as the framerate doesn't appear to have scaled well.
We'll keep looking!
Article on VXGI 2.0:
https://wccftech.com/nvidia-releasing-vxgi-2-0-better-perf/

VXGI 2.0 is packed with several more enhancements, such as one-pass voxelization (which according to Panteleev can almost double performance in most cases), support for custom G-Buffer layouts and View Reprojection for VR games, simpler voxel formats, simpler and more flexible materials, simpler tracing controls, improved upscaling and temporal filters and several additional fixes.
 
If raytracing's so great, why doesn't everyone use it? Because the current GPUs aren't powerful enough, same answer for both solutions. As I say to Iroboto above, who is going to look into using realtime GI solutions that require 4 TFs minimum if your base target spec is <2 TF consoles? Makes no sense. It's a tech that's not suited to current gen consoles. So just as we don't look at current gen games and say, "if raytracing is so great, why aren't any games using it?" we shouldn't look at current-gen games and question the value of VXGI or other next-gen solutions.

I've linked to several realtime videos running on 4 TF GPUs that show multiple area light sources with shadows and specular highlights.

But not fast enough to be useful, whereas voxelised solutions can be. A game that uses DXR traced reflections will look its best on an RTX 2070 or above, but on lower-tier and older hardware it will fall back to having no reflections at all. A game using VXGI won't look as good on the RTX GPUs, but will look far better on the 1060s and above in this world.

What do you count as a breakthrough? Not image reconstruction? Advanced temporal AA solutions rendering aliasing almost a thing of the past? What were the breakthrough rendering algorithms of the PS2 and PS3 generations?
1) Voxel cone tracing was introduced before the generation started. Since then, how much support has that technique received, whether on consoles or on PC? RTRT was introduced this year and its support is already better than VCT ever had.

2) And the quality is no good compared to RTRT demos.

3) Simple, just lower the quality of RT to the level of VCT. Performance is now way better.

4) One of your arguments for more compute instead of RT-specific hardware is that late in the gen, new, better algorithms would show up that would be better suited for compute instead of RT hardware, making it obsolete and crippling the hardware. When has that sort of mid-gen paradigm shift happened before? Certainly not this gen.
 
Looking at this, and looking back, NVIDIA seems to have the biggest experience in this field. From HBAO+ to PCSS to PCSS+ to HFTS to VXGI to VXAO to VXGI 2.0, NVIDIA kept trying and trying, and finally landed on RTX. And they chose hardware acceleration for it. I am quite sure that if NVIDIA had found a compute way to accelerate ray tracing, they would probably have done it through CUDA or something similar (as they did with several effects in the past). But they didn't, which means they probably found no other way than BVH acceleration to achieve real-time ray tracing at an acceptable speed.
 
Looking at this, and looking back, NVIDIA seems to have the biggest experience in this field. From HBAO+ to PCSS to PCSS+ to HFTS to VXGI to VXAO to VXGI 2.0, NVIDIA kept trying and trying, and finally landed on RTX. And they chose hardware acceleration for it. I am quite sure that if NVIDIA had found a compute way to accelerate ray tracing, they would probably have done it through CUDA or something similar (as they did with several effects in the past). But they didn't, which means they probably found no other way than BVH acceleration to achieve real-time ray tracing at an acceptable speed.
I imagine there are probably several teams at play here:
one that works on RT,
one that works on ML,
one that works on gaming graphics needs.

And so we'll likely still see an evolution of lighting solutions on the rasterization side even though RTX is released. The gaming graphics teams will need to work on something regardless; there's still a lot to be done if the wish lists from devs are anything to go by.
 
Well DXR is one thing and RTX is another. For DXR you can play with it on its own but for RTX you need to get your hands dirty.

DXR is essentially based on NVidia's implementation, though they also provide proprietary GameWorks libraries. Vulkan/OpenGL and OptiX support is either not functional yet or not relevant for gaming or game consoles.

And even though DXR may be based on existing core DXGI/Direct3D data formats, API conventions and HLSL definitions, it nevertheless encompasses an entirely new ‘raytracing’ pipeline - which has different enough setup requirements compared to existing rasterization and compute pipelines.

I still think that the point of RTX is to get developers on board with ray tracing; that's why it's designed for speed and ease of adoption.
RTX is just a marketing name that ties their latest video cards to some arbitrary set of software. We've seen quite a few of these names in recent decades.
 
The idea of a generalized, non-fixed-function RT sounds a lot better than what Nvidia is doing. But once again, I'm not exactly sure what that means entirely, and I also don't know if that's a limitation of DXR.

Not a limitation, just a design decision that aims for maximum performance.

Raytracing is essentially a search for ray intersections, and this is analogous to triangle rasterization in the traditional pipeline.

Both could be improved with advances in hardware capabilities - for rasterization, things like hierarchical tiled Z-buffers, Z-buffer compression and rasterizer-ordered views; and for raytracing, faster search using more efficient bounding volume hierarchies, shader indexing with bigger/wider resource descriptors/tables, or additional entry points to run color or geometry shaders
(see sections '6. System limits and fixed function behaviors' and '13. Potential future features' of the RS5 DXR API documentation published in this thread: http://forums.directxtech.com/index.php?topic=5985.0).
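
To make the 'search' framing concrete, here's a minimal, hypothetical CPU sketch of what that intersection search looks like against a BVH (made-up data layout; the RT cores essentially run this ray/box and ray/triangle testing loop in fixed-function hardware):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Ray  { float origin[3]; float invDir[3]; };  // invDir = 1/direction, precomputed
struct AABB { float min[3];    float max[3]; };

// Slab test: does the ray enter the box before tMax?
bool hitAABB(const Ray& r, const AABB& b, float tMax)
{
    float t0 = 0.0f, t1 = tMax;
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (b.min[axis] - r.origin[axis]) * r.invDir[axis];
        float tFar  = (b.max[axis] - r.origin[axis]) * r.invDir[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;        // slabs don't overlap: the ray misses
    }
    return true;
}

struct BVHNode {
    AABB bounds;
    int  left = -1, right = -1;           // child node indices (-1 = leaf)
    int  firstTri = 0, triCount = 0;      // triangle range for leaf nodes
};

// Placeholder leaf test; a real version does ray/triangle intersection and
// shrinks tClosest when it finds a nearer hit.
bool hitTriangles(const Ray&, int /*firstTri*/, int /*triCount*/, float& /*tClosest*/)
{
    return false;
}

// Iterative traversal: walk the hierarchy, skipping whole subtrees whose
// bounding boxes the ray misses. This search is what the RT cores accelerate.
bool traverse(const std::vector<BVHNode>& nodes, const Ray& ray)
{
    float tClosest = std::numeric_limits<float>::max();
    bool  hit = false;
    int   stack[64];
    int   top = 0;
    stack[top++] = 0;                     // start at the root node

    while (top > 0) {
        const BVHNode& node = nodes[stack[--top]];
        if (!hitAABB(ray, node.bounds, tClosest)) continue;
        if (node.left < 0) {              // leaf: test its triangles
            hit |= hitTriangles(ray, node.firstTri, node.triCount, tClosest);
        } else {                          // interior: visit both children
            stack[top++] = node.left;
            stack[top++] = node.right;
        }
    }
    return hit;
}
```

The win comes from skipping whole subtrees whose boxes the ray misses, which is why the quality of the acceleration structure matters as much as raw intersection throughput.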

These are proprietary, vendor-specific implementations that do not need explicit developer control, so making them programmable would reduce execution speed compared to a fixed-function implementation, for no apparent benefit. The programmable shaders sit where they would be most efficient.

We do need to start somewhere, and I'd rather start with basic RT features (before the mature ones) now than not have it at all next gen and just go yet another generation of rasterization.

DXR already has one important and mature feature of raytracing – recursion. Visual quality is only limited by the number of rays you trace per pixel. Of course, multiple rays per pixel would have enormous performance requirements for real-time rendering, so this would initially be reserved for reflective/transparent surfaces and ambient lighting, and will still require denoising.
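
As a toy illustration of what that recursion and the per-pixel ray budget mean in practice (hypothetical CPU-side code, nothing to do with the actual DXR shader model or HLSL): a hit on a reflective surface spawns a secondary ray, the depth is capped so a pixel can't generate unbounded work, and every extra ray per pixel multiplies this whole tree of work - which is why a ray or two per pixel plus denoising is the realistic starting point.

```cpp
struct Vec3 { float x, y, z; };
struct Hit  { bool valid; Vec3 position, normal; float reflectivity; };

// Placeholder scene query: a real renderer traverses an acceleration structure.
Hit intersectScene(const Vec3& /*origin*/, const Vec3& /*dir*/) {
    return {false, {}, {}, 0.0f};
}

// Placeholder direct lighting (shadow rays, area light sampling, etc.).
Vec3 directLighting(const Hit& /*h*/) { return {0.1f, 0.1f, 0.1f}; }

Vec3 reflect(const Vec3& d, const Vec3& n) {
    float dn = d.x * n.x + d.y * n.y + d.z * n.z;
    return {d.x - 2 * dn * n.x, d.y - 2 * dn * n.y, d.z - 2 * dn * n.z};
}

Vec3 trace(const Vec3& origin, const Vec3& dir, int depth)
{
    Hit h = intersectScene(origin, dir);
    if (!h.valid) return {0, 0, 0};       // ray escaped the scene

    Vec3 colour = directLighting(h);

    // Recursion: only spend a secondary ray where the surface is reflective,
    // and cap the depth so a single pixel can't explode into unbounded work.
    if (depth > 0 && h.reflectivity > 0.0f) {
        Vec3 bounce = trace(h.position, reflect(dir, h.normal), depth - 1);
        colour.x += h.reflectivity * bounce.x;
        colour.y += h.reflectivity * bounce.y;
        colour.z += h.reflectivity * bounce.z;
    }
    return colour;
}
```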

'DirectX Raytracing (DXR) Functional Spec v1.0' cited above lists a few potential new features on the roadmap - such as traversal shaders, more efficient acceleration structures, beams, and ExecuteIndirect() improvements.

In a discussion of purely features, and features only, the next-gen console would have the same features whether you choose to adopt RT or not. Added RT and Tensor cores are just more features.

DXR looks like a fundamental paradigm shift rather than just a new feature. It requires the developer to feed all geometry data, textures, and shaders to the raytracing pipeline, using a bindless direct memory access resource model, and to pre-build a BVH tree containing implementation-specific search acceleration structures for each frame rendered - essentially a proprietary hardware-optimized scene graph. So there is a whole lot of new computationally and memory-access intensive operations, all running in parallel to legacy rasterization and compute pipelines.
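
Sketching that out (purely hypothetical placeholder functions, not the real D3D12/DXR API) makes the "whole lot of new work" point clearer: the acceleration structure build/refit and the ray-traced passes sit alongside the passes the engine was already doing.

```cpp
// Hypothetical frame loop illustrating the extra per-frame work a DXR-style
// renderer takes on, alongside the passes it already had. These are made-up
// placeholder types and functions, not the real D3D12/DXR API.
struct Scene {};
struct AccelStructure {};
struct Frame {};

AccelStructure buildOrRefitBVH(const Scene&) { return {}; }  // compute + memory heavy
void rasterGBufferPass(const Scene&, Frame&) {}              // existing raster pipeline
void rayTracedPasses(const AccelStructure&, Frame&) {}       // reflections, shadows, GI
void denoiseAndComposite(Frame&) {}

void renderFrame(const Scene& scene, Frame& frame)
{
    // Every frame the full scene geometry (including animated meshes) must be
    // visible to the tracer and folded into an implementation-specific
    // acceleration structure before a single ray is cast.
    AccelStructure as = buildOrRefitBVH(scene);

    rasterGBufferPass(scene, frame);   // the legacy pipelines still run...
    rayTracedPasses(as, frame);        // ...with the raytracing pipeline on top
    denoiseAndComposite(frame);
}
```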

There’s a big difference between power and feature set. Consoles will likely stay in the middle of the power band, as their costs are directly associated with SoC size and TDP.
Feature set is something else entirely though; there are Intel IGPUs that support more DX12 features than some of the latest AMD releases.
Exactly!
 
DXR is essentially based on NVidia's implementation, though they also provide proprietary GameWorks libraries. Vulkan/OpenGL and OptiX support is either not functional yet or not relevant for gaming or game consoles.

And even though DXR may be based on existing core DXGI/Direct3D data formats, API conventions and HLSL definitions, it nevertheless encompasses an entirely new ‘raytracing’ pipeline - which has different enough setup requirements compared to existing rasterization and compute pipelines.

RTX is just a marketing name that ties their latest video cards to some arbitrary set of software. We've seen quite a few of these names in recent decades.
1) It's very early still. Next year's GDC will be very interesting.

2) Well, everything is marketing names with NVIDIA. "CUDA cores" for example.
 
Anyone here know if this Sony DevCon event in London is something or nothing? Seems super hush-hush and nobody seems to have heard anything about it: https://devcon2018.london/
I know Sony has their own event for first-party devs that doesn't get much shared with the public. Maybe that's one of them? I know one such meeting is near GDC, but maybe they have more than one per year. I don't know.
 
everything is marketing names with NVIDIA
'RTX' just feels ambiguous to me - if you talk about Nvidia raytracing hardware, it's rather 'GeForce RTX', and if you mean software, it's just 'DirectX Raytracing' (DXR). 'RTX' alone is not really useful, like any marketing term.
 
I know Sony has their own event for first-party devs that doesn't get much shared with the public. Maybe that's one of them? I know one such meeting is near GDC, but maybe they have more than one per year. I don't know.

Makes sense. I'm just curious why this site was set up when Sony already have DevNet they could have used. They even refer late registrants to DevNet to get a place. There isn't much to be found about these events (there is one pic for DevCon London 2017), but the ones conducted on the West Coast of America are the most talked about/reported on, and all seem to happen in May too.

Could be a fake site, I suppose, but I doubt it.
 
Makes sense. I'm just curious why this site was set up when Sony already have DevNet they could have used. They even refer late registrants to DevNet to get a place. There isn't much to be found about these events (there is one pic for DevCon London 2017), but the ones conducted on the West Coast of America are the most talked about/reported on, and all seem to happen in May too.

Could be a fake site, I suppose, but I doubt it.

I guess that an external site will make info more accessible for attendees.

Do you know when it was happening?
 
I guess that an external site will make info more accessible for attendees.

Do you know when it was happening?

No idea when it did or will happen. I was hoping devs here could confirm the event's existence and shed some light on whether this is just a regular annual event for PlayStation or something specific for next-gen, i.e. PS5.
 
No idea when it did or will happen. I was hoping devs here could confirm the event's existence and shed some light on whether this is just a regular annual event for PlayStation or something specific for next-gen, i.e. PS5.

The name of the event may be the answer: DevCon 2018
 