Next Generation Hardware Speculation with a Technical Spin [2018]

Why? Ray tracing has many uses beyond photorealistic graphics, even where photorealism isn't attainable on the next hardware.

Because of the sheer power needed to produce a ray-traced image. I believe 3D graphics is all about creating a series of images that use the least amount of computation for maximum effect, be it to achieve realism or for art's sake.
It's always been that way and will be for a while longer, I think.
Once you have to calculate the way light reacts in real time, well, that needs a huge amount of computational power.
I think we may see it at some point, used initially in extremely small doses.

Am I missing something?
 
It's hybrid rasterization and ray tracing together. Developers can determine how much RT they want to use to create the image and where they want to take advantage of it. It can be used to calculate some of the effects that we spend tons of cycles on and still don't get an accurate result. I suppose what's up for debate is how much hardware they need to add to support the features.
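To make the hybrid idea concrete, here's a rough sketch of how a frame could be split, with rasterization doing the bulk of the work and a small ray budget spent only on selected effects. Every function and setting name below is a made-up placeholder for illustration, not any real engine's API:

```cpp
#include <cstdio>

// Hypothetical hybrid frame: rasterize everything, then spend a small
// ray budget only on the effects where rasterization struggles.
// All of these functions are placeholder stubs, not a real engine API.
static void RasterizeGBuffer()             { std::puts("raster: G-buffer"); }
static void TraceShadowRays(int spp)       { std::printf("RT: shadows, %d spp\n", spp); }
static void TraceAmbientOcclusion(int spp) { std::printf("RT: AO, %d spp\n", spp); }
static void TraceReflections(int spp)      { std::printf("RT: reflections, %d spp\n", spp); }
static void DenoiseAndComposite()          { std::puts("denoise + composite"); }

struct HybridSettings {
    bool rtShadows     = true;   // trace shadow rays instead of shadow maps
    bool rtAO          = true;   // short AO rays instead of screen-space AO
    bool rtReflections = false;  // most expensive, disabled on weaker hardware
    int  raysPerPixel  = 1;      // tiny per-pixel budget, cleaned up by a denoiser
};

void RenderFrame(const HybridSettings& s) {
    RasterizeGBuffer();                        // conventional raster pass first
    if (s.rtShadows)     TraceShadowRays(s.raysPerPixel);
    if (s.rtAO)          TraceAmbientOcclusion(s.raysPerPixel);
    if (s.rtReflections) TraceReflections(s.raysPerPixel);
    DenoiseAndComposite();                     // combine raster + RT outputs
}

int main() { RenderFrame(HybridSettings{}); }
```

The point is that each RT pass is opt-in, so the cost scales with how much of the image a developer chooses to hand over to ray tracing.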
 
I thought Nvidia hardware ray tracing support was for an out-of-game scenario, e.g. cheaper and more efficient computer farms for ray tracing scenes.
 
Not necessarily. It's only for the super-duper cases that I can see it that way.

This should be released later this year, in 2018, I think. Not sure if it will ship with these DXR add-ons; I believe it will.
The developers wanted to explore the practical application of this tech in their proprietary 4A Engine. In the provided video demonstration they show RTX implemented and running in Metro Exodus. They have utilized true raytracing to render both Ambient Occlusion and Indirect Lighting in full realtime, in a practical in-game scenario.

Dynamic lighting has always been a priority for the Metro series. We intentionally avoided pre-baked data and were relying on real-time methods to build our visuals to support more flexible, dynamic gameplay. Global Illumination is highly important for the proper “grounding” of objects and readability of shapes, which in turn create improved ease of navigation and enhanced visual impact for the player.

Previously, we had utilized a mix of several custom-made systems to satisfy our hungry demand for dynamic content of varying scale. Now we are able to replace it with one single system that covers all our needs and outputs the quality of offline renderers.

4A Games’ Chief Technical Officer, Oleksandr Shyshkovtsov.

So I don't think Tensor cores are required for this, and that's what I mean about being in this unknown state of varying levels of support for DXR. I think as more information is released we'll know more. But it does seem to be a situation where consoles can probably support some of this without needing things like Tensor cores. Deciding what the baseline of hardware support will be is going to be more important than shipping with a lot of power in this case.

One would have to weigh having more GPU and CPU power from the get-go vs cutting back on GPU and CPU and inserting more things like Tensor cores to move the baseline of hardware support up, then possibly relying on a mid-gen refresh to bring the power in. All speculation of course; I've not a clue what hardware is required to support DXR, and likely it will change over the next 3 years.

MS has set themselves up for GPU-side dispatch of draw calls all the way from the OG Xbox One and likely through to next gen; if leveraged as a baseline it will play a large part in freeing CPU cycles. They could cut back on the size of Zen and their GPU to make space for things like Tensor cores in their next box if they so desire, I think.
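For reference, GPU-side dispatch of draws on DX12-class hardware goes through ExecuteIndirect, assuming that is the mechanism meant here. A minimal host-side sketch, with error handling omitted and the GPU-written buffers assumed to already exist:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch of GPU-driven draw submission via D3D12 ExecuteIndirect.
// A culling compute shader fills 'argBuffer' with D3D12_DRAW_ARGUMENTS
// records and 'countBuffer' with the surviving draw count, so the CPU
// issues one call instead of one draw call per object.
ComPtr<ID3D12CommandSignature> MakeDrawSignature(ID3D12Device* device) {
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;   // plain non-indexed draw

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride       = sizeof(D3D12_DRAW_ARGUMENTS);
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs   = &arg;

    ComPtr<ID3D12CommandSignature> sig;
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&sig));
    return sig;
}

void SubmitGpuDrivenDraws(ID3D12GraphicsCommandList* cmdList,
                          ID3D12CommandSignature*    sig,
                          ID3D12Resource*            argBuffer,   // GPU-written draw args
                          ID3D12Resource*            countBuffer, // GPU-written draw count
                          UINT                       maxDraws) {
    cmdList->ExecuteIndirect(sig, maxDraws, argBuffer, 0, countBuffer, 0);
}
```

The draw arguments and count live in GPU buffers, typically written by a culling compute shader, so the CPU's per-frame submission work shrinks to a handful of calls.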
 
I think the best moment to launch could be as soon as AMD's true next-gen architecture is available (maybe with features developed in collaboration). Then the next mid-gen can use that new architecture too; otherwise they paint themselves into a corner.

So launching a year earlier than that is a big missed opportunity for the power/performance improvement of the new architecture, and launching a year later is a potential zero sum. More power would cost more money, since there is no new architecture providing better cost effectiveness; the additional power has to come from more silicon and higher clocks (lower yield and/or expensive cooling).

That would be as far away as 2021 :runaway:
On the bright side, it would be a nice tech jump.

If anything, wouldn't the way to do it be to launch next-gen consoles on 7nm with Navi arch or features... then wait to use AMD's true next-gen architecture for the mid-gen consoles?
 
Cerny talked about ray tracing hardware in 2011, and he said it wouldn't be used in the PS4 even if it was available in time, because developers wanted traditional hardware; they didn't want fancy hardware not available elsewhere.

But this time it's not actually ray tracing hardware; it looks like a more specialized form of compute. The much higher performance of tensor cores is beginning to allow RT, or at least hybrid, which wasn't possible until now. And for AMD/Nvidia, if there's API support on PC and consoles, be it from DX, Vulkan, or GNM, we can expect games to use it everywhere. It should be even easier on consoles with the fixed hardware features.

I wonder how many rays per TF the tensor cores can do. It can't be enough for pure ray tracing engines.

In VFX we need hundreds or thousands of rays per pixel; it's pure brute-force RT with surprisingly simple math. All of this was hybrid rasterizers 15-20 years ago. GPGPU solutions like Redshift are gaining ground fast.
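To put rough numbers on that gap, a back-of-the-envelope comparison (the resolutions and sample counts here are my own illustrative picks, not anything quoted above):

```cpp
#include <cstdio>

// Back-of-the-envelope ray budgets: an offline VFX frame vs a 60 fps hybrid game.
int main() {
    const double pixels1080p = 1920.0 * 1080.0;            // ~2.07 million pixels
    const double pixels4k    = 3840.0 * 2160.0;            // ~8.29 million pixels

    // Offline VFX frame: assume 1000 samples (rays) per pixel at 4K.
    const double offlineRaysPerFrame = pixels4k * 1000.0;  // ~8.3e9 rays for one frame

    // Hybrid game: assume 2 rays per pixel at 1080p, 60 frames per second.
    const double gameRaysPerSecond = pixels1080p * 2.0 * 60.0;  // ~2.5e8 rays per second

    std::printf("offline frame : %.1e rays (one frame, rendered over minutes or hours)\n",
                offlineRaysPerFrame);
    std::printf("hybrid game   : %.1e rays per second, spread across 60 frames\n",
                gameRaysPerSecond);
    std::printf("one offline frame ~= %.0fx the rays a hybrid game spends per second\n",
                offlineRaysPerFrame / gameRaysPerSecond);
    return 0;
}
```

So even a generous real-time ray budget is orders of magnitude short of offline path tracing, which is why the hybrid approach leans so heavily on denoising and on tracing only a few effects.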
 
Ray tracing on the GPU is not a new thing. RTX didn't introduce any specific hardware feature. It just introduced a high-level API inside DirectX to abstract away ray tracing for devs, so they don't have to code their ray tracer themselves, and so GPU designers can potentially build hardware to accelerate it. OpenRT did the same thing for OpenGL almost 10 years ago. It just wasn't pushed as hard as it was by MS this year. Nvidia said their new pro GPUs have features tailored for it, but gave no details on them. The RTX implementation itself is still a black box for most people. Most dedicated devs would probably not be too happy to work with such an opaque black box in actual games, and many might end up creating their own custom implementation on compute. Still, if they really want to, they can do it on PS4 or Xbone right now. As far as we know right now, this whole hardware-accelerated RTX thing could be no more than hot air. I wouldn't hold my breath too much for it.
 
Because of the sheer power needed to produce a ray-traced image. I believe 3D graphics is all about creating a series of images that use the least amount of computation for maximum effect, be it to achieve realism or for art's sake.
It's always been that way and will be for a while longer, I think.
Once you have to calculate the way light reacts in real time, well, that needs a huge amount of computational power.
I think we may see it at some point, used initially in extremely small doses.

Am I missing something?
1) The latest realtime demos.
2) That you can use hybrid solutions and layer ray-traced content on top of rasterised content.
3) That ray-tracing hardware can be used for anything that requires tracing rays, such as audio rays and AI rays (as long as it's flexible enough).
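As a toy illustration of point 3, a single occlusion ray between a sound source and the listener is enough to drive a muffling decision. The scene representation and attenuation factor below are made up purely for illustration:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy audio-occlusion check: trace one ray (segment) from the sound source
// to the listener and muffle the sound if any sphere obstacle blocks it.
struct Vec3 { float x, y, z; };
struct Sphere { Vec3 c; float r; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True if the segment from 'from' to 'to' intersects the sphere.
static bool segmentHitsSphere(Vec3 from, Vec3 to, const Sphere& s) {
    Vec3 d = sub(to, from);                         // segment direction (unnormalized)
    Vec3 m = sub(from, s.c);                        // source relative to sphere centre
    float a = dot(d, d);
    float b = 2.0f * dot(m, d);
    float c = dot(m, m) - s.r * s.r;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;                  // the line misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);  // nearest intersection
    return t >= 0.0f && t <= 1.0f;                  // hit lies within the segment
}

int main() {
    std::vector<Sphere> obstacles = {{{0.0f, 0.0f, 5.0f}, 1.5f}};  // one blocker
    Vec3 source   = {0.0f, 0.0f, 10.0f};
    Vec3 listener = {0.0f, 0.0f, 0.0f};

    bool occluded = false;
    for (const Sphere& s : obstacles)
        if (segmentHitsSphere(source, listener, s)) { occluded = true; break; }

    float gain = occluded ? 0.3f : 1.0f;            // made-up muffling factor
    std::printf("occluded=%d, gain=%.1f\n", occluded, gain);
    return 0;
}
```

The same intersection query that RT hardware accelerates for lighting could, in principle, serve queries like this, which is why the flexibility of the hardware matters.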
 
Ray tracing on the GPU is not a new thing. RTX didn't introduce any specific hardware feature. It just introduced a high-level API inside DirectX to abstract away ray tracing for devs, so they don't have to code their ray tracer themselves, and so GPU designers can potentially build hardware to accelerate it. OpenRT did the same thing for OpenGL almost 10 years ago. It just wasn't pushed as hard as it was by MS this year. Nvidia said their new pro GPUs have features tailored for it, but gave no details on them. The RTX implementation itself is still a black box for most people. Most dedicated devs would probably not be too happy to work with such an opaque black box in actual games, and many might end up creating their own custom implementation on compute. Still, if they really want to, they can do it on PS4 or Xbone right now. As far as we know right now, this whole hardware-accelerated RTX thing could be no more than hot air. I wouldn't hold my breath too much for it.
Some corrections: RTX is Nvidia's; DXR is the API made by MS.

So, yes, it's not new, and we get that. Let's not confuse coding ray tracing through compute with coding RT using the ray tracing functions delivered by the API. One is handled by the developer, who has limited control over how the data is moved in the hardware; the other is handled by the manufacturers, who, upon reading the API call, will move data in a more efficient way to produce the result.

So let's assume nothing changes and every engine supports DX12; nothing would really need to change. With DXR, they can specify a flag to incorporate ray tracing, branching the code towards ray tracing (if the flag is passed) rather than, say, the rasterization method of AO or indirect lighting. Or you could just code against DXR and the API knows to leverage the hardware-accelerated version if present, and if not, use the compute version. All of a sudden you can support everything from the OG Xbox One all the way to a new Xbox that could contain hardware-accelerated features for DXR. The concept we're discussing here is really about the ease of bolting this ray tracing on top of an existing engine without needing to impact everything just to bring these features in. And then of course there are API calls within DXR that developers can leverage to their advantage without needing to think about implementation as much, since that part is going to be handled by AMD/Nvidia.
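A minimal sketch of that branching, using the DXR feature query exposed by the D3D12 headers (DXR is still experimental at the time of writing, so treat the exact enums and tiers as an assumption about where the shipping API lands):

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch: pick a ray-traced or compute/raster back end per effect at startup,
// based on whether the device reports raytracing support.
bool DeviceSupportsDxr(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

// Hypothetical engine-side switch: one high-level effect, two back ends.
enum class AoPath { RayTraced, ScreenSpaceCompute };

AoPath ChooseAmbientOcclusionPath(ID3D12Device* device) {
    // DXR present (hardware or driver/fallback implementation): trace real AO rays.
    // Otherwise keep the existing screen-space compute technique.
    return DeviceSupportsDxr(device) ? AoPath::RayTraced
                                     : AoPath::ScreenSpaceCompute;
}
```

Everything above the branch stays the same; only the back end for a given effect changes, which is what makes bolting this onto an existing engine plausible.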

This makes for an interesting discussion of what's possible in the future, and what could be back ported to current gen today.
 
I stopped caring about API discussions after the following sequence of events:
- A console's low level API should logically allow more efficient code
- Supposedly not true
- But aren't high level APIs more rigid since they are multi-vendor HAL?
- again supposedly not the case
...
- DX12 and Vulkan are revolutionary and double the frame rate

I'm out. I don't understand what an API is. :LOL:
 
I think PS5 might be released in late 2020, no sooner.

While I agree it would be nice if the PS5 could use AMD's next-gen GPU instead of Navi, I think there's a decent chance the PS5 GPU will be closer to that next-gen GPU than you might think.

Look back to 2004 and 2005. The Xbox 360 specs leaked in April 2004, and it released in late 2005 with a GPU (Xenos) that was closer in architecture to what ATI would have for PC in May 2007 (R600 / Radeon HD 2900), both using a unified shader architecture, than to what ATI had for PCs in late 2005 (R520 / Radeon X1800).

Moving back to recent times, AMD released the Polaris 10 / RX 400 series in mid 2016. The PS4 Pro came out later that fall, and its GPU was based mostly on Polaris and partly on Vega, even though Vega itself was 9-10 months away from reaching PC gamers.

Now looking ahead, I think PS5 and Xbox Next will not be released before Holiday / late 2020. Forget about 2019. Think late 2020 for PS5, and late 2020 at the earliest for the next Xbox, if not 2021.

Like some of you, I think it would be really stupid for next-gen consoles to miss out completely on AMD's next-gen GPU technology by sticking strictly to Navi and thus the end of the line for the Graphics Core Next architecture. At worst, PS5 and Xbox Next GPUs should incorporate many of the elements of the next-gen GPU architecture, even if the PC doesn't see it in full until 2021, much like the November 2005 Xbox 360 Xenos GPU preceded the May 2007 ATI PC parts.
 
I stopped caring about API discussions after the following sequence of events:
- A console's low level API should logically allow more efficient code
- Supposedly not true
- But aren't high level APIs more rigid since they are multi-vendor HAL?
- again supposedly not the case
...
- DX12 and Vulkan are revolutionary and double the frame rate

I'm out. I don't understand what an API is. :LOL:

Guessing that when the PS3 and 360 came out, coding generally became more high-level because of DX?
 
no ;)
He's referring to the marketing of things. His criticisms are valid; the marketing was designed to push adoption and a lot of stuff was thrown in there to support it. The outputs were different from what was advertised (but slowly getting there), and that's just the result of the game industry taking the steps to make changes to their newer titles and not their works in progress.
Coding on consoles has always been low level.

APIs are just interfaces that assist developers by handling things they don't need to worry about. The higher the level, the less you need to worry about; the lower the level, the more you need to worry about.

The misnomer is that because you have a low-level API you are 'close to the metal', and there seems to be confusion between shaders (the actual graphics programming) and the low-level API. The API controls the GPU and provides the constructs and data structures; the shaders are programs run by the GPU. Thus someone trying to create ray tracing through compute shaders doesn't have the ability to change the way the compute queue runs the program, or to use special data structures that would assist the speed of a ray tracing program at the GPU level, whereas an API designed for ray tracing would create those data structures and have a different method of moving or assigning data to the compute queue for processing. People tend to confuse this with being able to program the GPU, as in being able to program it to do whatever you want.

tl;dr: ray tracing through compute shaders is likely (barring poor implementation from vendors) to be less efficient than ray tracing through the API, even if the hardware is the same. (Better to ask a developer on this one; you lose the general-purpose flexibility in favour of some speed enhancements, a trade-off like any other.)
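To make that trade-off concrete: a compute-shader ray tracer has to hand-roll and traverse its own acceleration structure, something like the toy BVH below (CPU-side C++ purely for illustration; a real one lives in GPU buffers and shaders). An API like DXR hides that structure and its traversal behind acceleration-structure builds and TraceRay, which is exactly where vendors are free to substitute faster, possibly hardware-assisted, paths:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Toy flattened BVH plus an explicit traversal stack: the kind of machinery
// a hand-rolled compute ray tracer manages itself, and that a raytracing API
// keeps opaque so the vendor can optimize or replace it.
struct Vec3 { float x, y, z; };

struct BvhNode {
    Vec3 bmin, bmax;   // axis-aligned bounding box
    int  left;         // child indices, -1 when this is a leaf
    int  right;
    int  prim;         // primitive index when this is a leaf, -1 otherwise
};

// Slab test: does the ray hit the box within [0, tMax]?
static bool hitAabb(Vec3 o, Vec3 invDir, Vec3 bmin, Vec3 bmax, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float ro[3]  = {o.x, o.y, o.z};
    const float inv[3] = {invDir.x, invDir.y, invDir.z};
    const float lo[3]  = {bmin.x, bmin.y, bmin.z};
    const float hi[3]  = {bmax.x, bmax.y, bmax.z};
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - ro[axis]) * inv[axis];
        float tFar  = (hi[axis] - ro[axis]) * inv[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = tNear > t0 ? tNear : t0;
        t1 = tFar  < t1 ? tFar  : t1;
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative traversal with an explicit stack, collecting candidate leaves.
static void traverse(const std::vector<BvhNode>& nodes, Vec3 o, Vec3 dir,
                     std::vector<int>& hitPrims) {
    Vec3 invDir = {1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z};
    std::vector<int> stack = {0};                        // start at the root
    while (!stack.empty()) {
        int idx = stack.back(); stack.pop_back();
        const BvhNode& n = nodes[idx];
        if (!hitAabb(o, invDir, n.bmin, n.bmax, 1e30f)) continue;
        if (n.prim >= 0) { hitPrims.push_back(n.prim); continue; }  // leaf node
        stack.push_back(n.left);
        stack.push_back(n.right);
    }
}

int main() {
    // Tiny two-leaf BVH: a root box split into two boxes along x.
    std::vector<BvhNode> nodes = {
        {{-2, -1, -1}, {2, 1, 1},  1,  2, -1},   // root
        {{-2, -1, -1}, {0, 1, 1}, -1, -1,  0},   // leaf holding primitive 0
        {{ 0, -1, -1}, {2, 1, 1}, -1, -1,  1},   // leaf holding primitive 1
    };
    std::vector<int> hits;
    traverse(nodes, {-1.0f, 0.0f, -5.0f}, {0.0f, 0.0f, 1.0f}, hits);
    std::printf("candidate primitives along the ray: %zu\n", hits.size());  // expect 1
    return 0;
}
```

DXR essentially asks the developer to give up this level of control in exchange for letting the driver own that structure.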

Consider PC games and how Nvidia and AMD release drivers to improve performance in specific games. The DX API didn't change; they just looked at the game code and found ways to change how the driver reacts to the API calls to speed things up. (My understanding.)
 
Shader replacement ?!

Say it ain't sooooo
 
Some devs will prefer to have their own data structures, will want to do ray tracing with a different method, and don't like working with a black box...

http://aras-p.info/blog/2018/03/21/Random-Thoughts-on-Raytracing/

And he is not the only rendering developer who is not very happy with the black-box API...

Edit: Claybook is using ray tracing without RTX...
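For context, Claybook reportedly ray traces signed distance fields rather than triangle structures, which is a good example of a custom method the API wouldn't cover. A generic single-ray sphere-tracing loop (my own illustrative code, not Claybook's) looks roughly like this:

```cpp
#include <cmath>
#include <cstdio>

// Sphere tracing against a signed distance field (SDF): step along the ray
// by the distance to the nearest surface until close enough to call it a hit.
struct Vec3 { float x, y, z; };

// Example scene: a single sphere of radius 1 at the origin.
static float sceneSdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Returns the hit distance along the ray, or a negative value on a miss.
static float sphereTrace(Vec3 origin, Vec3 dir, float maxDist) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {   // bounded march
        Vec3 p = {origin.x + dir.x * t,
                  origin.y + dir.y * t,
                  origin.z + dir.z * t};
        float d = sceneSdf(p);                       // distance to nearest surface
        if (d < 1e-3f) return t;                     // close enough: hit
        t += d;                                      // safe step size
    }
    return -1.0f;                                    // no hit within range
}

int main() {
    float t = sphereTrace({0.0f, 0.0f, -5.0f}, {0.0f, 0.0f, 1.0f}, 100.0f);
    std::printf("hit distance: %.3f (expected ~4.0)\n", t);
    return 0;
}
```

None of this needs a triangle BVH or a raytracing API, which is why engines built around distance fields or other custom representations may not gain much from a black-box triangle tracer.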
 
Back to hardware: how much does RT hardware support require a compromise elsewhere? I was thinking the more generalized the hardware is, the less it compromises die area, since it will see continuous usage from other tasks.
 
If it requires tensor cores then it's going to take up some space; not sure how much.
Looking at Titan V specs:

CUDA cores: 5,120
Single-precision performance: 15 TF
Tensor cores: 640
Tensor performance: 110 TF
Memory bus width: 3072-bit
Die size: 815 mm²
Transistor count: 21.1B
Process: TSMC 12nm FFN

Xbox One X at 16nm FF:
Transistor count: 7B @ 359 mm² die
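A quick density comparison from those quoted figures (different processes, so only a rough indicator):

```cpp
#include <cstdio>

// Rough transistor-density comparison from the spec numbers quoted above.
// 12nm FFN vs 16nm FF are different processes, so this is only indicative.
int main() {
    const double titanV_transistors = 21.1e9;   // Titan V
    const double titanV_area_mm2    = 815.0;
    const double xb1x_transistors   = 7.0e9;    // Xbox One X SoC
    const double xb1x_area_mm2      = 359.0;

    std::printf("Titan V    : %.1f M transistors / mm^2\n",
                titanV_transistors / titanV_area_mm2 / 1e6);   // ~25.9
    std::printf("Xbox One X : %.1f M transistors / mm^2\n",
                xb1x_transistors / xb1x_area_mm2 / 1e6);       // ~19.5
    return 0;
}
```

Either way, 640 tensor cores inside an 815 mm² die suggests they are not free, even if nobody outside Nvidia knows exactly how much area they cost.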
 
Some devs will prefer to have their own data structures, will want to do ray tracing with a different method, and don't like working with a black box...

http://aras-p.info/blog/2018/03/21/Random-Thoughts-on-Raytracing/

And he is not the only rendering developer who is not very happy with the black-box API...

Edit: Claybook is using ray tracing without RTX...
That's fine; developers aren't being forced to use DXR, just like they aren't forced to use Tiled Resources or Sony's checkerboarding techniques. But that doesn't mean it's a terrible thing to see that baseline move up and have it available for everyone to use. The most important thing is finding the right tool for your problem; if DXR doesn't satisfy it, then you're going to need to go with a custom method.
 