Next Generation Hardware Speculation with a Technical Spin [2018]

It seems to be more flexible than people are making it out to be.
Correction - "It seems to be more flexible that people are aware." There's no agenda in this discussion (other than a subsection of the pro-RT camp who'll just trumpet RT one-sidedly) and those questioning the value of RTX aren't just trying to downplay it in some anti-nVidia agenda or somesuch.
 
Correction - "It seems to be more flexible that people are aware." There's no agenda in this discussion (other than a subsection of the pro-RT camp who'll just trumpet RT one-sidedly) and those questioning the value of RTX aren't just trying to downplay it in some anti-nVidia agenda or somesuch.

And Nvidia is not the only one, or the first, to do a rasterization/raytracing hybrid rendering GPU. The first was Imagination/PowerVR.

And I think it is too soon for RTX; probably the generation after next.
 
I like the way your pro-RT stance highlights the negatives. ;) To me, the results look great and the performance runs fine on existing GPUs. That's a significant win, a great piece of tech, and a great early adoption of SDF, which is far newer tech than raytracing (only 20 years old?).
ahhh, lol, not my intention, it was a lazy post so understandably people will assume things about my intentions here.

I'm not necessarily pro-RT because I think it's better. I just can't shake the feeling that we've been here before, where the cost of quality per pixel is too high a price to pay, so instead we increase the resolution, texture detail, etc. Every generation of next-generation power has largely been spent on moving up the resolution, the frame rate, or the colour depth (those 16/24/32-bit shoot-outs back in the day).

HRT is an interesting topic of discussion precisely because of what it does: it amps up the load tremendously without burdening developers in a restrictive way. It removes the concept of hacks and all the restrictions that hacks come with, and ultimately you're relying on the power of the hardware to crunch through it.

We talked a lot earlier in this thread about quality per pixel. A lot of people floated how great graphics would look if you could imagine 12 TF of power put into a 1080p title. Well, I'm looking at it. It's Hybrid Ray Tracing.

The trouble I've had with this debate is that there is this assumption that non-RT methods could approximately generate the same results for a significantly smaller performance hit. And that's where I wanted to shed some light on the subject: there are issues aside from performance that limit whether or not a title can use a specific feature, and if the feature is necessary, the game ends up being designed around those limitations.

We often talk about why exclusives generally look 'better' than third-party AAA titles. That's because they target only one platform, they're fully funded, and they can design the game however they want, around any limitations they want, without being penalized for it. The idea that all the non-1P companies could operate in a similar manner is unrealistic. 3P companies don't have that flexibility: if they want the game to do certain things, they either drop the feature or brute-force it. And with the introduction of DXR and DirectML, brute force seems like the most direct way to get those high-end graphics on any platform, without impacting your game design and without the development baggage that comes with trying to hack your way through optimization.

But the stress of it goes back onto the hardware.

I dunno, we've been here before, like when the 4K consoles got announced. No one bats an eye at the 4K topic anymore; it's pretty clear you shouldn't be going into next gen supporting only 1080p. But boy, did we have a large topic on the 4Pro and Scorpio. We had all sorts of images and graphs saying that if you didn't have a certain screen size and weren't sitting at a certain distance, you couldn't see the difference. We had people just straight up say they couldn't see the difference.

To them I say, make sure you're seeing 20/20 before you make that claim.

But with the HRT demos, we get a similar response: "I'm not seeing a big difference."

But I'm sure that once you started playing HRT games all the time, you'd see a big difference jumping back. And this is like all things: people notice going down in resolution, going down in frame rate, going down in graphical quality.

I'm not entirely sure how RT will be implemented. I just know that next gen can't be sold on 4K and VR. We did that already. And you can't sell it on CPU - look at how the PS3 turned out.
 
ahhh, lol, not my intention, it was a lazy post so understandably people will assume things about my intentions here.
Your intentions make for good discussion. Any good discussion needs arguments from both sides, but a good discussion has those ideas considered and accepted/rejected/changed, rather than both sides just opposing each other stubbornly.

I'm not necessarily pro-RT because I think it's better. I just can't shake the feeling that we've been here before, where the cost of quality per pixel is too high a price to pay, so instead we increase the resolution, texture detail, etc. Every generation of next-generation power has largely been spent on moving up the resolution, the frame rate, or the colour depth (those 16/24/32-bit shoot-outs back in the day).
I think that's underselling what's actually changed. With PBR materials, improved lighting and advanced AA techniques, this generation is not last generation at a higher resolution. The question is where the advances are going to come from next gen. RT could definitely be a game changer, but more advanced GPUs in general could achieve a lot too.

The trouble I've had with this debate is that there is this assumption that non-RT methods could approximately generate the same results for a significantly smaller performance hit.
I don't think that's quite the case. IIRC LB asked how far traditional rendering without RT could go, and that led to a general question of how much hardware RT requires and whether, if that silicon were directed elsewhere, you could get better results overall, where 'better results' extends beyond raytracing. For example, the 4K that you state we can't back away from. If RT can only produce 1080p with the hardware we'd see in consoles (your suggestion is 2070 level, and AFAIK we haven't got benchmarks on that yet), while rasterising can produce true 4K with inferior but not totally crap reflections, is the latter option better? Or comparable, if just different? Would people prefer 30 fps with RT'd shadows, reflections and refractions, or 60 fps with these faked? We don't have any decent reference points at the moment, so we can only talk hypothetical differences, and us 'anti-RTers' (who aren't anti raytracing ;)) are just trying to emphasise the scope of the unknowns, which some on the other side of the discussion have hitherto been unwilling to acknowledge, flatly going with "raytracing is better".
 
I think that's underselling what's actually changed. With PBR materials, improved lighting and advanced AA techniques, this generation is not last generation at a higher resolution. The question is where the advances are going to come from next gen. RT could definitely be a game changer, but more advanced GPUs in general could achieve a lot too.
No doubt, PBR materials and physically based shading were a major component of this generation, but those advances can work with an HRT pipeline.

I don't think that's quite the case. IIRC LB asked how far traditional rendering without RT could go, and that led to a general question of how much hardware RT requires and whether, if that silicon were directed elsewhere, you could get better results overall, where 'better results' extends beyond raytracing. For example, the 4K that you state we can't back away from. If RT can only produce 1080p with the hardware we'd see in consoles (your suggestion is 2070 level, and AFAIK we haven't got benchmarks on that yet), while rasterising can produce true 4K with inferior but not totally crap reflections, is the latter option better?
I don't think the trade-off goes away just because we go HRT. There is still a trade-off: in this case, to get to higher frame rates or higher resolution, there needs to be a dependency on some form of reconstruction or approximation technique.
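To make "some form of reconstruction" concrete, here's a toy sketch (purely illustrative, names made up, not any shipping technique - real checkerboard/temporal reconstruction also leans on motion vectors and history buffers): render only half the pixels in a checkerboard pattern, then fill the skipped ones from their rendered neighbours.

[CODE]
import numpy as np

def fill_checkerboard(frame, rendered):
    """Toy reconstruction: average the rendered left/right/up/down
    neighbours into every pixel that was skipped this frame."""
    out = frame.copy()
    h, w = frame.shape
    for y in range(h):
        for x in range(w):
            if rendered[y, x]:
                continue
            vals = [frame[ny, nx]
                    for ny, nx in ((y, x - 1), (y, x + 1), (y - 1, x), (y + 1, x))
                    if 0 <= ny < h and 0 <= nx < w and rendered[ny, nx]]
            if vals:
                out[y, x] = sum(vals) / len(vals)
    return out

# Pretend we shaded only the "white" squares of a 4x8 image at full cost.
h, w = 4, 8
rendered = np.fromfunction(lambda y, x: (x + y) % 2 == 0, (h, w))
ground_truth = np.arange(h * w, dtype=float).reshape(h, w)  # stand-in image
sparse = np.where(rendered, ground_truth, 0.0)
approx = fill_checkerboard(sparse, rendered)  # roughly half the shading cost
[/CODE]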

And this is where my mindset has started to change on rendering: it seems more beneficial to me to have a world not impeded by hacks, approximated up to the target resolution or frame rate, than to have a world impeded by hacks without any approximation to bring it up to that resolution or frame rate.

Both of them are still approximations, so the question becomes which trade-off is better. I can see some cases where the RT solution would be better, and some scenarios where the traditional route would be. I think that's an honest answer. Not every game needs everything.

But the problem is that if you don't put any RT hardware in, the above scenario doesn't exist. Whereas if you do put it in, it exists, with some slight compromises to the power available to the traditional rendering path.
 
Correction - "It seems to be more flexible that people are aware." There's no agenda in this discussion (other than a subsection of the pro-RT camp who'll just trumpet RT one-sidedly) and those questioning the value of RTX aren't just trying to downplay it in some anti-nVidia agenda or somesuch.
If you can't argue your case you can always mischaracterize your opponent's, right?

PBR can't reach its full potential with rasterization holding it back. The trade-off of graphical fidelity vs resolution/framerate already exists and will continue to exist next gen. It's not specific to RT; any graphical improvement is part of it.
 
The "full CU" in that patent application has "simple" ALUs with 4 times the execution units of the "full" ALUs; they do not disclose the ratio for the "small CU".

If you consider divisor pairs of 3640, one side of every pair is divisible by 5, which allows a clean 1:4 full:simple split and makes possible the following configurations of CU blocks and full/simple ALUs (reproduced in the sketch after the list):

104 × 35 (7:28), 52 × 70 (14:56), 26 × 140 (28:112), 13 × 280 (56:224),
91 × 40 (8:32), 182 × 20 (4:16), etc.
56 × 65 (13:52), 28 × 130 (26:104), 14 × 260 (52:208), etc.
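A quick way to reproduce that list (a minimal sketch, assuming the 1:4 full:simple ratio from the "full CU" applies throughout, which the application doesn't actually confirm for the "small CU"):

[CODE]
# Enumerate factorizations of 3640 shader processors into CU blocks x ALUs,
# keeping only the splits where a clean 1:4 full:simple division exists.
TOTAL_SP = 3640

for blocks in range(1, TOTAL_SP + 1):
    if TOTAL_SP % blocks:
        continue
    alus_per_block = TOTAL_SP // blocks
    if alus_per_block % 5:
        continue
    full = alus_per_block // 5       # one part "full" ALUs
    simple = alus_per_block - full   # four parts "simple" ALUs
    print(f"{blocks} x {alus_per_block} ({full}:{simple})")
[/CODE]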

A Bounding Volume Hierarchy is essentially a tree of bounding boxes - how much "research and experimentation" do you need for a data structure that contains 2 memory pointers and 8 XYZ coordinates?
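For what it's worth, a node really is that small. A minimal sketch (layout illustrative only; real implementations pack this into a cache-line-friendly format, and the "8 XYZ coordinates" - the eight box corners - collapse to a min/max pair):

[CODE]
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BVHNode:
    bounds_min: Vec3                   # lower corner of the AABB
    bounds_max: Vec3                   # upper corner of the AABB
    left: Optional["BVHNode"] = None   # the "2 memory pointers"
    right: Optional["BVHNode"] = None
    primitive: Optional[int] = None    # leaf nodes reference a triangle/object instead

def ray_hits_box(origin: Vec3, inv_dir: Vec3, node: BVHNode) -> bool:
    """Standard slab test used during traversal: does the ray touch this node's box?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (node.bounds_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (node.bounds_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax
[/CODE]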

Yes, Nvidia's subdivision algorithm is proprietary and tied to their fixed-function hardware, but what would be the use of giving developers full control of the parameters, other than possibly stalling the BVH traversal hardware? Nvidia has been doing its homework on BVH (see for example these four research papers), and if you really need your own structure, there are 3rd-party OpenCL libraries.

Would it be correct to assume 4 FP32 operations per shader core? For example, a 3640 SP device @ 1.172 GHz would give you ~17 TF?
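Checking that arithmetic (everything here is speculative, including the 4-ops-per-SP assumption, which is double the usual 2 ops/clock of an FMA):

[CODE]
shader_processors = 3640
clock_ghz = 1.172
fp32_ops_per_sp_per_clock = 4   # assumption from the question above

tflops = shader_processors * clock_ghz * fp32_ops_per_sp_per_clock / 1000
print(f"{tflops:.2f} TFLOPS")   # ~17.06, i.e. the ~17 TF figure
[/CODE]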
 
Correction - "It seems to be more flexible that people are aware." There's no agenda in this discussion (other than a subsection of the pro-RT camp who'll just trumpet RT one-sidedly) and those questioning the value of RTX aren't just trying to downplay it in some anti-nVidia agenda or somesuch.

I didn't mean for it to come across that way. Just pointing out that it looks to have more flexibility than people are aware of.
 
Related:

https://wccftech.com/exclusive-first-amd-navi-gpu-will-have-40-cus-and-is-codenamed-navi-12/

Navi 12 will have 40 CUs and no GCN. Super SIMD then? It makes sense if it's true that Sony financed its development, with 2/3 of AMD's GPU employees dedicated to it. As mentioned in the article, AMD would have "freely" developed its next-gen microarchitecture in exchange for not allowing MS to use it.
To quote myself from Navi-thread:
Let's assume Navi 12 is released in H1/19. There's no chance in hell AMD is releasing high-end gaming Navi in H2/20 - H1/21; that's just too big a gap for the same architecture. High end in H2/20 - H1/21 could happen if they do another Polaris-Vega combination, but not with Navi-Navi. I'm also quite sure Navi will still be GCN in every sense of it (I don't see 64 CUs as a GCN limitation, just a limitation of the current implementations, since AMD has said they could change it if they decide to build a front-end for more than 4 Shader Engines and Shader Engines ready for more than 16 CUs each).

Sony definitely didn't finance its development, or 2/3 of it, no matter how it's twisted, and Navi 12's possible CU count has absolutely nothing to do with the Navi architecture's limits, financiers, development limits or whatever. Be it 40 CUs, 64 CUs, 12 CUs or whatever, it doesn't affect the architecture development; it's just a matter of how many CUs you decide to slap in when you're building the chip layout, which is completely independent of other chips' CU counts (to the extent of limitations of other parts, like the current front-end being limited to 4 shader engines).

AMD designs their architectures for themselves; there's a snowball's chance in hell of any company limiting that. Big partners like Sony (or MS for that matter) can give input and possibly affect the design, but that's it - they're customers of the semi-custom group, and the semi-custom group gets completed IP blocks to play with; they don't dictate the design of the IP blocks.
 
If you can't argue your case you can always mischaracterize your opponent's, right?

What do people expect, actually? This is the console section; RT might not be in the 2020/2021 consoles, so it generally won't see any positive reactions as is.
It's like the fixed-function pixel and vertex shaders from the GF2/GF3 era; I'm sure GPU designers will improve upon the technology as soon as it's possible.
 
What do people expect, actually? This is the console section; RT might not be in the 2020/2021 consoles, so it generally won't see any positive reactions as is.
It's like the fixed-function pixel and vertex shaders from the GF2/GF3 era; I'm sure GPU designers will improve upon the technology as soon as it's possible.
I'm sure RT will make its way to next-gen consoles one way or another. Will it be hardware accelerated? That's the big question.
 
That would be a shame really, if Nvidia reserves it for the mid and high end only (2070 and up) for that long. A 2070 should be more affordable by that time, though.

It would be a shame. How low can anyone reasonably expect a $550-$600 card to drop in 1 to 2 years?
 