Next gen lighting technologies - voxelised, traced, and everything else *spawn*

I don't see how a next-gen game is going to be so much different in the RT part, even if it's developed just for the PS5. All MS games come to PC too, so there you go.
And why do you think it's impossible for MS to cross-develop between Xbox and PC with RT as a requirement? RT GPUs will be fairly old hardware by late 2020, about 2.5 years on the market by then. People will have to adopt SSDs at some point too, and 8-core CPUs, etc.
It was your argument that RT makes things easier and that it seems to be important for consoles. I'm merely providing a counter that it is actually not easier, because it adds extra work, not less. And when you stated that consoles seem to see it as an important feature, I explained why it's much easier for devs on the console.

I never stated that it's impossible for MS to cross-develop. Of course MS will have to develop the game with standard lighting/shadows when releasing on PC as well. But for games released solely on a particular console, all I'm saying is that you're free of those constraints.
 
There will be games developed for the Xbox with RT (like Halo, perhaps), and those games will be on PC, most likely with even better RT implementations. Hardware requirements will simply increase, same for the SSD. Making things easier isn't only important to one platform, though.
 
If anything this is going to leave GPUs without RT in a very tough spot; they might be running seriously handicapped versions of the console ports... especially lazy ports.
 
I don't see it having any impact until 2023. What developer or publisher wants to abandon an install base of 150 million gamers (PS4 and Xbox One) to sell to 20-30 million in the first year?
 
If anything this is going to leave GPUs without RT in a very tough spot; they might be running seriously handicapped versions of the console ports... especially lazy ports.

Same with SSDs, I guess... and CPUs with fewer than 8 cores. I can't think of many PCs that run without an SSD these days, or with anything less powerful than an 8-core with a 3.2GHz boost, by the end of 2020. RT GPUs have been here since 2018; by late 2020 they'll have been around for 2.5 years. It's just that we've been hanging on to 7870-level requirements for too long now. My PC is ready for most of it, and I'll upgrade if needed, but a 3700X is already faster, and so should a 2080 be. NVMe and Optane are quite fast too, I think.

DF did a nice video on what they think will be comparable.

Oh, and MS's exclusives (PC/Xbox) will force people to upgrade.
 
I don't see it having any impact until 2023. What developer or publisher wants to abandon an install base of 150 million gamers (PS4 and Xbox One) to sell to 20-30 million in the first year?
There are some. Not the majority, of course, but you will see some abandon the old gen pretty fast, particularly if the scope of their game is too large or complex to be made on the old consoles (the old gen lacking an SSD will be a huge factor in this), or if they are indie devs with a limited budget.
 
And as usual the PC, being the dynamic platform it is, will follow suit; I don't see any problems, same as any other gen. The PS5 will probably offer the best exclusives, as in other generations. But it's not the PS2 days anymore; this gen we had about 10 big exclusives, if even that (and some even get ported). How they will take care of RT is yet to be seen, but I doubt it will be very dramatic, as the hardware isn't really ready for it, exclusives or not. We might see what we're seeing now with RT games, perhaps a bit better, a bit improved. And uses for audio, VR, etc.

A potential HZD2 is going to look awesome, RT or not.
 
I don't see it having any impact until 2023. What developer or publisher wants to abandon an install base of 150 million gamers (PS4 and Xbox One) to sell to 20-30 million in the first year?
This might be inevitable; RT can be switched on and off. That's a nice thing about it: in theory you can still develop two rendering paths. Expensive but doable, and perhaps not even that expensive. Just keep the current rendering path and add an all-new RT path. Possibly double the work, but doable.

CPU, memory and hard drive speeds cannot. So they'll need to design down across the board to keep last generation around.
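For illustration, a minimal sketch of that two-path idea in C++. Everything here (Renderer, FrameContext, the method names) is hypothetical, not any real engine's API; the point is only that the RT path can sit behind a runtime switch while the legacy path stays intact.

```cpp
// Hypothetical sketch of a renderer keeping two lighting paths behind a switch.
#include <cstdio>

struct FrameContext {
    bool hardwareRTSupported = false;  // queried from the GPU/driver at startup
};

class Renderer {
public:
    bool useRayTracedPath = true;  // user-facing graphics setting

    void RenderLighting(const FrameContext& ctx) {
        if (ctx.hardwareRTSupported && useRayTracedPath) {
            RenderRayTracedShadowsAndReflections();  // the all-new RT path
        } else {
            RenderShadowMaps();              // legacy path: shadow maps,
            RenderScreenSpaceReflections();  // SSR, baked/probe GI, ...
        }
    }

private:
    void RenderRayTracedShadowsAndReflections() { std::puts("RT path"); }
    void RenderShadowMaps()                     { std::puts("shadow maps"); }
    void RenderScreenSpaceReflections()         { std::puts("SSR"); }
};

int main() {
    Renderer renderer;
    FrameContext ctx;
    ctx.hardwareRTSupported = false;  // e.g. a GPU without RT hardware
    renderer.RenderLighting(ctx);     // falls back to the raster path
}
```

The branch itself is trivial; the "possibly double the work" part is that both branches have to be authored, tested and tuned.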

For all the hype around the new CPU, hard drive, etc., it would be a Debbie Downer if developers purposefully locked themselves to last gen.
 
What's worse than Debbie Downer? They could purposefully limit the design or scope of the game so it could even run on the Nintendo Switch. The market potential of PS4/Xbox One/Switch has to be staggering compared to next-gen only, two years in.

Like you said, they could scale the graphics engine and run with dual engines, one targeting next-gen and one for everything else, with more limitations on the Switch. I think the real game changers come from having a CPU and streaming-asset speeds that could enable entirely new techniques. I hope there are some developers (outside of just first party) that opt to push the boundaries and explore what's now possible with those advances, even if the realist side of me doubts that they will.
 
It's only less effort if you could do thousands of rays per pixel. That will never happen, because till then we'll find enough other stuff worth our effort. :)
Well, it is also low effort since it is a base UE4 feature you just turn on and off, much like setting shadows to a higher res or SSR to more samples, etc.

Low effort RT IMO! Nothing to decry.

Also, that non-trailer video there is DX11, so it does not have RT.
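To illustrate the "just turn it on and off" point, here is a hedged UE4 C++ sketch. The console-variable name (r.RayTracing.Reflections) is assumed from UE 4.2x and should be checked against your engine version; the project also needs ray tracing enabled for it to do anything.

```cpp
// Sketch only: toggle UE4's ray-traced reflections at runtime via a console
// variable (cvar name assumed from UE 4.2x; verify for your engine version).
#include "HAL/IConsoleManager.h"

void SetRayTracedReflections(bool bEnable)
{
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.RayTracing.Reflections")))
    {
        CVar->Set(bEnable ? 1 : 0, ECVF_SetByCode);
    }
    // When disabled, the renderer falls back to SSR -- conceptually the same
    // knob-turning as raising shadow resolution or SSR sample counts.
}
```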
 
Well, it is also low effort since it is a base UE4 feature you just turn on and off, much like setting shadows to a higher res or SSR to more samples, etc.
But this argument applies to the whole renderer and the rest of the game engine. The extra effort is still there, just outsourced. (To a team that has to serve every platform from mobiles to high-end PCs, which, on the other hand, will not help RT.)

BTW, I also assume additional fragmentation with upcoming API tiers. E.g. traversal shaders really seem like a game changer to me, but Volta will not have them. Not sure about Ampere, but I hope so. AMD very likely will (if the TMU patent is their plan), and Intel also likely, because the LOD paper is their research.

It will take time... But the worst thing that could happen is additional RT effort becoming one more argument for more AAA studios to move to UE4, Unity, etc. I see that it makes sense to share work, outsource to specialists, etc., but tech development has always been a key element of game development.
Not sure if we want to lose this. I expect slower progress if all games use the same engine, and games more similar to each other.
 
But this argument applies to the whole renderer and the rest of the game engine. The extra effort is still there, just outsourced. (To a team that has to serve every platform from mobiles to high-end PCs, which, on the other hand, will not help RT.)

BTW, I also assume additional fragmentation with upcoming API tiers. E.g. traversal shaders really seem like a game changer to me, but Volta will not have them. Not sure about Ampere, but I hope so. AMD very likely will (if the TMU patent is their plan), and Intel also likely, because the LOD paper is their research.

It will take time... But the worst thing that could happen is additional RT effort becoming one more argument for more AAA studios to move to UE4, Unity, etc. I see that it makes sense to share work, outsource to specialists, etc., but tech development has always been a key element of game development.
Not sure if we want to lose this. I expect slower progress if all games use the same engine, and games more similar to each other.
I am a bit confused - how is that different from any other feature of a middleware engine? As I see it, the point of a middleware engine is to offload the research, development and high-end features to the engine team itself, which then presents them to the game developer in a more abstracted fashion in exchange for money.

So for example, the character shading work Epic did for Digital Humans or Paragon... all of that R&D time was on Epic's side and was then rolled into an update in a user-facing form for developers to use. Hence the whole middleware thing.

Ray Tracing in UE4 is the same way, and I do not see anything bad about it. Rather normal as I see it.

As far as I know, we have no idea whether or not Ampere will support possible DXR 1.1 changes. I did ask a developer about RT LODs, though, and they mentioned that IHVs in general are on board with it.
 
I am a bit confused - how is that different from any other feature of a middleware engine?
It's not. JoeJ's concern is it'll encourage more devs to use middleware instead of their own bespoke, more optimised engines. Historically, consoles could punch well above their weight because they had leaner, optimised use of the hardware without software bloat and overheads. If we move to middleware, that advantage is severely diminished.

It's quite possibly unavoidable, along with multiplatforming and one-eye-on-mobile and the-other-eye-on-cloud-streaming, changing the console gaming landscape based on economics. Same way the consoles are homogenising towards being PCs. It has upsides and downsides, but clearly the 'romance' of consoles and all that makes them special will be lost in this new future.
 
It's not. JoeJ's concern is it'll encourage more devs to use middleware instead of their own bespoke, more optimised engines. Historically, consoles could punch well above their weight because they had leaner, optimised use of the hardware without software bloat and overheads. If we move to middleware, that advantage is severely diminished.

It's quite possibly unavoidable, along with multiplatforming and one-eye-on-mobile and the-other-eye-on-cloud-streaming, changing the console gaming landscape based on economics. Same way the consoles are homogenising towards being PCs. It has upsides and downsides, but clearly the 'romance' of consoles and all that makes them special will be lost in this new future.

Game development has gotten more expensive, and development time has also increased. Multiplatform is a thing (things even need to run on Nintendo consoles and mobile). We will see a handful of Sony exclusives exploiting the hardware like this gen, and some from MS.
 
I agree with your points, but look at the downsides as well.
Taking UE as the example, it started as a copy of the Quake engine, and over time they added more features to it, alongside tools for content creation.
Meanwhile, id completely abandoned the idea of baked lighting and came up with Doom 3, the first game with fully dynamic lighting and proper shadows - real progress.
It then takes some time until UE adds those or similar features too, and even more time until those features work robustly. (Remember character shadows in the wrong places in the first UE games featuring shadow maps.)
Meanwhile id abandons the idea of fully dynamic lighting and goes back to baked, but now they show unique texture on every surface in a huge open world.
Crytek shows big progress too with their completely dynamically lit, huge open worlds, coming up with SSAO and the first dynamic GI.
UE also adds similar methods to compete. Many features, some incompatible with each other, but it works and grows.
But efficiency suffers; it becomes bloated and out of date, and unable to compete with custom engines on current-gen consoles, so they decide to go public and target indies, and for the first time they also show real progress of their own, with TAA and PBR.

Nothing wrong with this. But now, adding RT, it is again just one more optional feature (or many) on top. The alternatives still need to be maintained. Still nothing wrong - they have enough manpower to do so.

To see how wrong this might be, you need to compare it with something like path tracing, where a single algorithm calculates all the lighting. In contrast, in games we have dozens of methods, each faking just one piece of the puzzle, each with its own performance cost, and those costs add up.
Now, I don't say PT makes sense for games, but custom engines have it much easier to do things differently: implementing new methods while abandoning old ones, coming up with more efficient results, achieving progress.
So I guess the trend will continue: we will see new things in custom engines first, while UE of course makes it easy to enable RT effects for many other games.
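To make that contrast concrete, here is a toy C++ sketch (not any real renderer, all names made up): one unified integrator where all lighting falls out of the same loop, versus a stack of independent passes, each faking one effect and each carrying its own cost and maintenance burden.

```cpp
// Toy contrast between a single unified integrator and a stack of separate
// approximation passes. Placeholder code only; no real rendering happens here.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Path-tracing philosophy: in a real path tracer this one function would give
// direct + indirect lighting, shadows, reflections and AO from the same loop.
float ShadePathTraced() { return 1.0f; }  // placeholder radiance

// Typical real-time philosophy: many independent passes, each faking one piece.
struct Pass { std::string name; std::function<void()> run; };

int main() {
    std::vector<Pass> passes = {
        {"shadow maps",     []{ /* rasterize depth from each light */ }},
        {"baked GI lookup", []{ /* sample lightmaps / probes       */ }},
        {"SSAO",            []{ /* screen-space occlusion hack     */ }},
        {"SSR",             []{ /* screen-space reflection hack    */ }},
    };
    for (const Pass& p : passes) {
        p.run();
        std::printf("ran %s (its own cost, its own edge cases)\n", p.name.c_str());
    }
    (void)ShadePathTraced();
}
```

Each entry in that list is a separate system to write, tune and keep compatible with the others, which is exactly the summed-up cost the post is pointing at.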

Ray Tracing in UE4 is the same way, and I do not see anything bad about it. Rather normal as I see it.
Yeah, but normal / state of the art is not progress, it is the result of it. That's why it is important that at least the big studios have their own engines, using middleware only for things where the current state of the art is good enough for them. (Either that, or having more competition in the middleware segment.)
I don't mean UE is bad in any way, and sorry if my historical example is not accurate.
 
Savings from RT are probably more relevant on the art/design/lighting side than on the coding side. Less tweaking, hacking, baking and faking.
 
*Until you have artists that want to hack stuff for a particular scene/level because art.
It is much easier to tweak a realistic renderer to your vision of art than it is with a hacky/fake one.
I hear this argument quite often, but it's completely misplaced.

Instead I would like to turn the arrow around and ask you this:
Assuming games can do whatever they want, why do we see cool special effects so rarely?
For example: Terminator-like morphing from liquid metal to a robot, or a character that morphs from man to woman, or the trippy tunnel visions like in 2001: A Space Odyssey.
It seems the movie industry is much more creative here, even though those things would be so much easier for us. All we come up with is ugly rim lighting to highlight important objects (which you can still do in any case).

If there ever is a lack of creativity, progress towards realistic rendering / simulation / AI is not the reason. This only adds new options and opportunities.
 
That's expected, because GCN is bad with random access. Navi fixed this with the new cache hierarchy, but it seems NV still has the edge here, assuming random access is the dominant factor in this interesting benchmark.

BTW, according to https://github.com/sebbbi/perftest, Navi seems even faster with random access than with linear?!? But maybe that's just noise. (Random still uses small offsets in his tests, smaller than what RT would cause.)

Navi:
StructuredBuffer<float>.Load uniform: 9.047ms 1.395x
StructuredBuffer<float>.Load linear: 5.461ms 2.310x
StructuredBuffer<float>.Load random: 4.722ms 2.672x

GCN3:
StructuredBuffer<float>.Load uniform: 12.653ms 2.770x
StructuredBuffer<float>.Load linear: 8.913ms 3.932x
StructuredBuffer<float>.Load random: 35.059ms 1.000x
RDNA did change the memory path so that it delivers 32 addresses/clock, which differs from GCN's 4 addresses/clock (16 coalesced). That might play a role in why random loads improved the way they did. GCN's coalescing was pretty limited as well; at least my impression for some past GPUs was that the window for address coalescing was limited to the 4/16 being considered in a given cycle, which could miss accesses a few lanes away. Navi's evaluation window may be widened due to its 32-wide handling. As for why the linear patterns may take longer than random, perhaps there's some other resource those patterns can exhaust now that the addresses are not so constrained.
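As a way to picture why the access pattern matters, here is a toy C++ model of per-wavefront address coalescing. It is a deliberately simplified mental model, not how GCN or RDNA actually coalesce: it just counts how many distinct 64-byte cache lines a 64-lane wavefront touches under each pattern, since more distinct lines roughly means more memory transactions per wave.

```cpp
// Toy coalescing model (NOT real hardware behaviour): count distinct 64-byte
// cache lines touched by a 64-lane wavefront for three access patterns.
#include <cstdint>
#include <cstdio>
#include <random>
#include <set>

constexpr int kLanes = 64;
constexpr int kLineBytes = 64;

int DistinctCacheLines(const uint64_t (&addr)[kLanes]) {
    std::set<uint64_t> lines;
    for (uint64_t a : addr) lines.insert(a / kLineBytes);
    return static_cast<int>(lines.size());
}

int main() {
    uint64_t uniform[kLanes], linear[kLanes], random[kLanes];
    std::mt19937_64 rng(42);
    for (int lane = 0; lane < kLanes; ++lane) {
        uniform[lane] = 4096;                      // all lanes read the same dword
        linear[lane]  = 4096 + lane * 4u;          // consecutive 4-byte elements
        random[lane]  = (rng() % (1u << 20)) * 4;  // scattered, BVH-traversal-like
    }
    std::printf("uniform: %d line(s)\n", DistinctCacheLines(uniform)); // 1
    std::printf("linear : %d line(s)\n", DistinctCacheLines(linear));  // 4
    std::printf("random : %d line(s)\n", DistinctCacheLines(random));  // ~64
}
```

In this simplified picture, uniform and linear loads collapse into a handful of transactions per wave, while divergent loads need roughly one per lane, which is where a wider address path (32/clock vs. 4/clock) and the new cache hierarchy would be expected to help.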

Up to a maximum of 5 instructions can issue per cycle, not including “internal” instructions.
This is potentially not entirely the same for Navi. The Navi whitepaper doesn't make the same overall claims about instruction issue, and its diagram has fewer arrows coming from instruction arbitration. That might just be for the purposes of layout in a diagram that is pressed for space, however.


Maybe this is just about scheduling the ALU SIMDs, while other units like the scalar or memory units have their own schedulers?
If there's an instruction type for which s_waitcnt must be used, that's a decent indicator that there's another domain/sequencer involved.
The major counters are vector memory (now split into store and load counters for Navi), LGKM (LDS, GDS, scalar mem, message), and export. There is an implied VALU count as well, or there was.
There's more complexity than that, since there are multiple categories within LGKM, and overlap between counters for things like flat addressing (LDS and VMEM) and potential overlap between GDS and export.
That there are certain ordering guarantees, or the lack thereof, between types may indicate other independent domains.

This has more explicit effects for Navi thanks to the s_clause instruction, which allows a wavefront to monopolize certain CU functions that would otherwise be shared between wavefronts: VALU, SMEM, LDS, FLAT, texture, buffer, global, scratch.
The latter group seems to align with the vector memory path, though potentially there is contention with the LDS domain, since FLAT is a combination of the two.

The scalar unit itself seemed more clearly to be independent in GCN, since it was shared between SIMDs and had a raft of NOP and hazard warnings when it came to dependencies between the vector and scalar paths. Scalar ALU and scalar memory operations seem to fall under the same issue path, although there are indications of another layer of scheduling complexity in how the scalar cache cannot be relied upon to return values in order.
 