Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

So a tech demo looks better than an unreleased game using unreleased hardware, beta drivers and a beta SDK, and that's the standard for how all games will look for the next two to three years?
This tech demo only requires a meager PS4 Pro to run, so a next-gen console in two years' time would surely be enough to have an actual game running with that graphics quality. And while the Tomb Raider RT demo was a beta, I doubt optimization, an updated SDK etc. could push it 4x faster. I'm not saying the TR demo should be the standard for raytracing in the next few years, but it's a strong indication of the level of hardware grunt needed to run it. I can only imagine how much of a slaughter it would be for the almighty 2080 Ti if RTX is used in next-gen Tomb Raider games: more light sources, far more complex geometry, denser levels, and shadows would surely require more rays to be traced than are needed now. Someone please correct me if I'm misinformed, because I don't see how next-gen consoles could come close to having the raw power to trace anything of that sort.
 
Tomb Raider is using secondary rays to trace shadows. We don't know how many samples per pixel, what optimizations they're using, how they're de-noising, etc. Without knowing what they're doing, how do we make any performance judgments other than knowing it runs in real-time?
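To make the unknowns concrete, here's a minimal sketch (my own placeholder names, nothing from the actual game) of what tracing shadows with secondary rays boils down to; the cost is directly proportional to the sample count they chose, which is exactly what we don't know:

```cpp
// Illustrative sketch of "secondary rays for shadows": average the visibility
// of several rays fired from the shaded point towards an area light.
// Vec3, traceAnyHit and sampleAreaLight are placeholder names, not engine API.
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stub for the BVH "any hit" query a real tracer provides.
static bool traceAnyHit(Vec3 /*origin*/, Vec3 /*dir*/, float /*maxDist*/) {
    return false; // placeholder: real version walks the acceleration structure
}

// Random point on a spherical area light (placeholder sampling).
static Vec3 sampleAreaLight(Vec3 center, float radius) {
    auto r = [] { return (float(std::rand()) / RAND_MAX) * 2.0f - 1.0f; };
    return center + Vec3{r(), r(), r()} * radius;
}

// Fraction of shadow rays that reach the light; cost is linear in samplesPerPixel.
float shadowVisibility(Vec3 point, Vec3 lightCenter, float lightRadius, int samplesPerPixel) {
    int unoccluded = 0;
    for (int i = 0; i < samplesPerPixel; ++i) {
        Vec3 d = sampleAreaLight(lightCenter, lightRadius) - point;
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (!traceAnyHit(point, d * (1.0f / dist), dist)) ++unoccluded;
    }
    return float(unoccluded) / float(samplesPerPixel);
}
```

One ray per pixel gives hard, noisy shadows; anything soft needs more samples plus a denoiser on top, so the same feature can differ in cost by an order of magnitude depending on those choices.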
 
Yeah, until any of these RT titles launch and we get some benchies with RT on/off on the RTX cards and on AMD GPUs (to get a rough feel for whether any of this work is likely to impact consoles), we're kind of going in circles. I think the announcement that the GTX 2060 will not feature the RT acceleration cores further cements for me the idea seen on here that these are "go fast" stripes for high-end cards largely targeted at the Quadro market, but flipped on for the consumer space because it allows them to offer it to two separate high-margin markets at once, evening out the costs over both. There are a fair few parts of the OpenGL spec that are supported in the base hardware but are gated off by drivers for the Quadro market (a classic example is Wireframe AA), so even if the units are largely wasted in most titles they give the high-end gaming cards a clear USP over the mid-range cards while they can cane it on the Quadro-branded models.

Speaking of the Quadro market, ChaosGroup, the makers of V-Ray, spun up a demo named Project Lavina using RTX. While they are not making any commitment to releasing it into production, they will be taking the lessons back to V-Ray GPU and may release it down the line. Haven't seen any others yet.
https://www.chaosgroup.com/blog/ray-traced-tendering-accelerates-to-real-time-with-project-lavina
 
We don't actually know if Nvidia are going to put RT cores in mid-level hardware, do we?

It's an unknown, but Nvidia refers to its raytracing cards as RTX and, from what coverage I've read, has indicated the lower-end card will be the GTX 2060, so I'm not expecting RT hardware.

Saying that high-end PC hardware has a small user base can be countered with the fact that the high-end console user base is small too.

Countered? What do you think is countered? I still don't understand what you're arguing or what your point is. What do the Pro and X have to do with the proliferation of raytracing hardware by game devs? Are you engaged in some weird PCMR argument that my brain is tuning out? :???:
 
Are you engaged in some weird PCMR argument that my brain is tuning out? :???:

It was said that, thanks to the range of hardware on PC from low end to high end, devs have to keep a baseline (it has to run on, let's say, GTX 660 hardware, mid-range hardware from 2012) and scale from there. This is the same on console though: devs will target the vanilla PS4/Xbox One and scale from there to the Pro and One X, which is why those new consoles don't offer more than resolution and framerate increases if the game gets patched or gets a Pro version.
On PC, if one wants, there's the ability to play games in full native 4K, or higher like the Rez remaster in 16K, at a stable 60 or much higher fps, with AA/AF and max settings. RT is just another option: if a game supports it, one can enable it if it's worth the performance decrease, and the same goes for VR.

It has indirectly to do with the Pro and One X, as there's a big possibility Sony and MS might do the same thing again next gen. This is great in some ways but also fragments the experience. There are choices on both, but on PC you have many more choices/abilities, at a higher price.

I personally think RT will get bigger and bigger. I mean, AMD and Intel will have to follow and implement something in their PC GPUs too, and down the line, around the time next-gen consoles release, there will probably be a new generation of AMD/Nvidia/Intel GPUs with improved raytracing performance, and more games will support it.
 
This is the same on console though: devs will target the vanilla PS4/Xbox One and scale from there to the Pro and One X, which is why those new consoles don't offer more than resolution and framerate increases if the game gets patched or gets a Pro version.

Yes and no. Sony and Microsoft both have policies that prohibit games that only run on Pro or X. On PC, the reason to support older hardware is economic. The higher the technical bar for entry, the smaller your prospective market is.

RT is just another option: if a game supports it, one can enable it if it's worth the performance decrease, and the same goes for VR.

I think this is where you possibly don't understand. Raytracing is not merely another option like supporting high resolutions, texture filtering options or shadow quality, most of which are heavily supported through game engines and drivers. Raytracing means writing new code specifically for experimental Microsoft DirectX raytracing SDKs, which are likely to change. This means code you write now, or in three months' time, may not work in twelve months' time.

I personally think RT will get bigger and bigger. I mean, AMD and Intel will have to follow and implement something in their PC GPUs too, and down the line, around the time next-gen consoles release, there will probably be a new generation of AMD/Nvidia/Intel GPUs with improved raytracing performance, and more games will support it.

So do I, but we're a long, long way from that. Just look at the introduction of key 3D advancements over the last 10-15 years and see how long they took to become mainstream. Even things like moving from vertex shaders to pixel shaders, then several iterations of pixel shader API advancements, took a long time to become common, and the reasons for that are the same as those making raytracing an undesirable investment for most game devs right now.
 
Yes and no. Sony and Microsoft both have policies that prohibit games that only run on Pro or X. On PC, the reason to support older hardware is economic. The higher the technical bar for entry, the smaller your prospective market is.

I understand that, but in the end it doesn't matter why, economics or policies: people with a 4TF system are only getting resolution and fps increases on their much more powerful Pro in comparison to vanilla users. Game devs developing their games now for PC are kind of treating GTX 660 GPUs as their minimum, and that's roughly where the PS4 sits.

So do I, but we're a long, long way from that.

That might be true, but Nvidia has started it now and won't just forget about it; the next-gen GPUs from them and AMD/Intel will probably have better RT support than a 2080 has now. It remains to be seen whether it will take off in two years or maybe six, and whether consoles will get any hardware RT support. They have 4K, HDR and VR marketing covered, so now they can market RT; something new has to come to the next PS5 and Xbox to get people excited?
 
Can't see the 20xx having much of an impact on the next PS/Xbox, tbh. These cards can barely handle even the simplest implementations of RTX in existing software. For next-gen consoles faster rasterization is the way to go; fully fledged RT is 3-5 years off. It may very well be supported in some way on consoles, but I can't see dedicated RT hardware being a thing yet: a waste of space and money. I would say something like DLSS is more probable given the hardware limitations.
 
I understand that, but in the end it doesn't matter why, economics or policies: people with a 4TF system are only getting resolution and fps increases on their much more powerful Pro in comparison to vanilla users. Game devs developing their games now for PC are kind of treating GTX 660 GPUs as their minimum, and that's roughly where the PS4 sits.

And what's the significance of this for raytracing? :???:
 
If a PS5 Pro does have RT but the vanilla console doesn't, as it's speculated could happen, then there's the same problem as in the PC space.
 
Which is why the only thing that makes sense to me is to launch a genuine next gen with RT in 2024, with full BC, if you ever bother with RT hardware in the console space. It does not make sense to do mid-gen updates with RT hardware in 2024, since games still need to run on RT-less hardware from 2020.
 
Which is why the only thing that makes sense to me is to launch a genuine next gen with RT in 2024, with full BC, if you ever bother with RT hardware in the console space. It does not make sense to do mid-gen updates with RT hardware in 2024, since games still need to run on RT-less hardware from 2020.
Will raytracing ever make sense for consoles?
The issue I have with this is that having some raytracing capability was never really the problem. The problem was to make use of it in conjunction with existing pipelines in an efficient enough manner.
The overwhelming trend in computing is, and has to be, efficiency. At a given budget in dollars and power, what are the best results that can be achieved? And if someone manages to produce similar results at lower cost, they win in the mass markets.
That a 2080Ti runs Battlefield V at 1080p says a lot.
Ah, you say, but future generations will devote more hardware to the problem, increasing performance!
Sure that’s possible for a few generations more, but then those resources could be spent speeding up existing approaches, or you could choose to save power and money and thus reach more customers.
So the question then becomes what raytracing brings to the table that other approaches cannot do, and for the most part that seems to be refraction (as opposed to reflections). Which, to be honest, just isn't particularly important even in scenes where it exists, much less for gameplay. I have a pair of glasses lying in front of me on a table with small puddles of water, and looking at them I'm not even noticing refraction, much less capable of imagining a scenario in which it would bring something significant to my gameplay.
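For concreteness, this is roughly all the maths refraction asks for per hit (a sketch with placeholder vector helpers, not anyone's actual code); the cheap part is bending the direction, the expensive part is that the bent ray then has to be traced on through the scene, which is the step rasterisation can't do:

```cpp
// Minimal sketch of refraction via Snell's law, the effect under discussion.
// A rasteriser has no natural way to follow the bent ray into the scene;
// a ray tracer just spawns one more ray in this direction.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// incident and normal are unit vectors; eta = n1 / n2 (e.g. ~0.75 entering water).
// Returns false on total internal reflection.
bool refractDir(Vec3 incident, Vec3 normal, float eta, Vec3& out) {
    float cosI = -dot(incident, normal);
    float sinT2 = eta * eta * (1.0f - cosI * cosI);
    if (sinT2 > 1.0f) return false;              // total internal reflection
    float cosT = std::sqrt(1.0f - sinT2);
    out = add(scale(incident, eta), scale(normal, eta * cosI - cosT));
    return true;
}
```

Whether that extra traced ray is worth spending on puddles and spectacles is exactly the question.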

Until it can be demonstrated that raytracing can bring significant advances, at comparable or better efficiency than existing and future alternative methods, it just isn't competitive. I'm not sure those conditions will ever be fulfilled. Targeting low-cost mass-market devices just lowers the probability further.
 
If you could replicate the effects of raytracing, wouldn't somebody have done it already? I mean, raytracing helps developers and content creators reduce the cost of their product, speed up the production process and get much better quality. The only reason rasterization exists is performance.

I don't see the problem with 1080p. It is perfectly fine for consoles. Upscaling and something like DLSS will reduce the negative effect of the lower resolution. On the other hand, more developers are able to put better graphics into their games without increasing the cost too much.
 
Until it can be demonstrated that raytracing can bring significant advances, at comparable or better efficiency than existing and future alternative methods, it just isn't competitive.
True: realtime global illumination. Unified lighting provides the greatest visual cues to make a scene look solid and believable. All lighting to date has been hacks upon hacks to make the lighting work, with baked lightmaps and SSAO and shadow maps. It's a collection of awkward kludges that should be replaced with an elegant, effective solution when possible. This would make games more visually appealing, to the point that it's the ultimate objective (once we have realtime photorealism, we're done!), and devs' lives easier.

Raytraced illumination on top of rasterised models may well be the best compromise of performance and quality.
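As a rough illustration of that compromise (all names here are my own placeholders, not any engine's API): keep the rasterised G-buffer, and spend rays only on the terms that are currently faked, shadows and indirect light.

```cpp
// Hedged sketch of a hybrid renderer: rasterise the G-buffer as today, then use
// rays only for the lighting terms rasterisation fakes. GBufferSample,
// traceShadowRay and traceDiffuseBounce are placeholder names, not real API.
struct Vec3 { float x, y, z; };
struct GBufferSample { Vec3 position, normal, albedo; };

// Stubs standing in for the ray-traced queries a hybrid renderer would expose.
static float traceShadowRay(Vec3 /*origin*/, Vec3 /*toLight*/) {
    return 1.0f;               // placeholder: real version fires a ray at the light
}
static Vec3 traceDiffuseBounce(Vec3 /*origin*/, Vec3 /*normal*/) {
    return {0.1f, 0.1f, 0.1f}; // placeholder: real version gathers one indirect bounce
}

Vec3 shadePixel(const GBufferSample& g, Vec3 toLight, Vec3 lightColor) {
    // Direct term: same analytic shading as today, but a traced shadow ray
    // replaces the shadow map.
    float nDotL = g.normal.x * toLight.x + g.normal.y * toLight.y + g.normal.z * toLight.z;
    if (nDotL < 0.0f) nDotL = 0.0f;
    float vis = traceShadowRay(g.position, toLight);

    // Indirect term: a traced bounce replaces baked lightmaps and SSAO.
    Vec3 gi = traceDiffuseBounce(g.position, g.normal);

    return { g.albedo.x * (lightColor.x * nDotL * vis + gi.x),
             g.albedo.y * (lightColor.y * nDotL * vis + gi.y),
             g.albedo.z * (lightColor.z * nDotL * vis + gi.z) };
}
```

The appeal is that the expensive primary visibility stays on the rasteriser, while the traced terms replace exactly the kludges listed above (shadow maps, lightmaps, SSAO).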
 
Bi-directional path tracing gives good results; some people @ IMG should be able to tell you how many rays per second you need to get good enough results.
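For a sense of scale, a back-of-envelope count with assumed numbers (mine, not anything from IMG): path-traced 1080p at 60 fps with one path per pixel and a couple of bounces already lands in the hundreds of millions of rays per second, before the extra connection/shadow rays that bidirectional tracing adds.

```cpp
// Back-of-envelope only, with assumed numbers: how many rays per second a
// modest path-traced target implies.
#include <cstdio>

int main() {
    const long long width = 1920, height = 1080;
    const long long pathsPerPixel = 1;  // samples per pixel per frame
    const long long raysPerPath   = 3;  // primary + two bounce segments
    const long long fps           = 60;

    const long long raysPerFrame  = width * height * pathsPerPixel * raysPerPath;
    const long long raysPerSecond = raysPerFrame * fps;

    // ~6.2 million rays/frame, ~0.37 Grays/s, before the shadow/connection
    // rays that bidirectional tracing adds on top.
    std::printf("%lld rays/frame, %.2f Grays/s\n", raysPerFrame, raysPerSecond / 1e9);
    return 0;
}
```

Getting to "good enough" typically means several samples per pixel plus denoising, so multiply accordingly.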
 
So much work has been done over the last few years on de-noising low-sample-rate ray tracing. I think that's why there's suddenly a push to bring ray tracing to real-time.
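The core trick is simple enough to sketch; the following is only a skeleton of temporal accumulation (real denoisers such as SVGF or the AI-based ones add reprojection and variance-guided spatial filtering on top), with names of my own choosing:

```cpp
// Toy sketch of the core idea behind low-sample-count denoising:
// temporally accumulate the noisy 1-spp result into a running average,
// so the effective sample count grows across frames.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

class TemporalAccumulator {
public:
    TemporalAccumulator(std::size_t pixels, float alpha)
        : history_(pixels, Color{0, 0, 0}), alpha_(alpha) {}

    // Blend this frame's noisy ray-traced value into the history buffer.
    Color resolve(std::size_t pixel, Color noisy) {
        Color& h = history_[pixel];
        h.r = alpha_ * noisy.r + (1.0f - alpha_) * h.r;
        h.g = alpha_ * noisy.g + (1.0f - alpha_) * h.g;
        h.b = alpha_ * noisy.b + (1.0f - alpha_) * h.b;
        return h;   // displayed value: far less noisy than the 1-spp input
    }

private:
    std::vector<Color> history_;  // one running average per pixel
    float alpha_;                 // e.g. 0.1-0.2; lower = smoother but more ghosting
};
```

Blending each new 1-spp frame into a running average effectively grows the sample count over time, which is why a handful of rays per pixel can end up looking usable.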
 
Personally, I feel we are getting into the "90% of the effort for 10% of the distance" endgame phase of gaming graphics, and would frankly rather see more juice being spent on physics, AI, audio or other areas. I mean, we're mostly talking here about a war of the kludges: are the myriad raster methods better or worse than low-sample-rate denoised RT operating on a low-LoD version of a scene?

For the consoles, for the next few years at least, I would see the former as making the most sense. I mean, by eliminating it from the mid-range cards, Nvidia is de facto saying this is not worth doing on anything but the most lavish rigs.
 
Can't see the 20xx having much of an impact on the next PS/Xbox, tbh. These cards can barely handle even the simplest implementations of RTX in existing software. For next-gen consoles faster rasterization is the way to go; fully fledged RT is 3-5 years off. It may very well be supported in some way on consoles, but I can't see dedicated RT hardware being a thing yet: a waste of space and money. I would say something like DLSS is more probable given the hardware limitations.
Even more than 3-5 years. The RTX cards can barely achieve a hybrid RT solution at 1080p. Well, it looks nice, but in the demos they turned RTX off and didn't even try to fake some of those effects, which would normally be done (even if something is out of the scene, you can fake it). Fakes would really reduce the impact that RT has in the demo now.
And btw, RTX has another problem: the chip size. Well, 7nm may be on the horizon, then in a few years maybe 5nm and 3nm, but then it is over with the current techniques. And you can always pack more traditional units into that area and make even better "fake" calculations than today. Also, resolution done the traditional way is far above these low-res ray-traced demos.
What is interesting about real-time raytracing support is getting a real-time preview of the scene you develop (e.g. for a movie). For that purpose, RTX cards are really cheap.
If they don't add those units to the smaller cards (e.g. a 2060), there won't be many customers that can take advantage of those effects, so the custom support would be dead in the water like GPU-based PhysX. If they do add it to the smaller cards, they can't reduce the "extra" chip space, because the RTX unit is already too small; or does anybody really want a 480p/720p picture with a new card?

For consoles this is far too expensive. Chips will already be more expensive because of Ryzen, 7nm won't be cheaper, and memory is another cost factor. If they really want to bring out a 399 €/$ console at launch, it can't have something like this on board, unless AMD invents support for this via their shaders or something like that.
 
I'd love to see raytracing coupled with foveated rendering. That'd ease the power limits by focussing on 10% of the screen, meaning proper realtime raytracing could be performed with full, unified lighting. Machine-learning reconstruction could readily fill in the low-fidelity blanks around the foveal portion of the display. With foveated rendering, rasterisation hacks at their best may hit photorealism, but raytracing would solve all the production aggro.
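A rough sketch of how that ray budget might be split (the radii and sample counts are made-up numbers, just to show the shape of it):

```cpp
// Foveated ray budget: spend full samples only near the tracked gaze point,
// taper off in a transition band, and leave the periphery to reconstruction.
#include <cmath>

int samplesForPixel(float px, float py,         // pixel coordinates
                    float gazeX, float gazeY,   // tracked gaze point
                    float screenWidth) {
    float dx = px - gazeX, dy = py - gazeY;
    float dist = std::sqrt(dx * dx + dy * dy);
    float fovealRadius = 0.10f * screenWidth;   // ~10% of the screen fully shaded
    if (dist < fovealRadius)        return 8;   // full-quality ray-traced lighting
    if (dist < 2.0f * fovealRadius) return 2;   // cheap transition band
    return 0;                                   // periphery: reconstructed/upscaled
}
```

Full-quality tracing inside the foveal circle, a cheap transition band, and reconstruction everywhere else.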
 