VFX_Veteran
LOL! I didn't know we were tallying up triangles per game level when that's not what's being rendered in the camera frustum.

No idea, we will have to wait for the tech paper I suppose. Note I'm not claiming there are a billion triangles on screen; I'm saying the world probably consists of at least a billion tris at LOD0. E.g. perhaps that grass tuft is 2,000 tris, and if there are tens of thousands of them in the world, that's a lot right there; it rapidly adds up. The world in my personal game has ~30 million tris and it's nowhere near the level of what I've seen in R&C, not even in the same ballpark.
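For the sake of argument, here is that back-of-the-envelope tally as a tiny snippet; the 2,000-tri tuft and the 50,000 instances are illustrative numbers taken in the spirit of the post above, not measurements from any game:

```cpp
// Back-of-envelope source-asset triangle tally (illustrative numbers only).
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t trisPerGrassTuft = 2'000;   // LOD0 source asset
    const std::uint64_t grassInstances   = 50'000;  // "tens of thousands"
    const std::uint64_t grassTotal = trisPerGrassTuft * grassInstances;

    std::printf("grass alone: %llu million tris\n",
                static_cast<unsigned long long>(grassTotal / 1'000'000));
    // 2,000 * 50,000 = 100 million tris from one foliage type alone,
    // before counting characters, props, terrain, buildings...
    return 0;
}
```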
hehe. I realized some people were interpreting his intent wrongly, but it felt pretty obvious to me he meant "source asset geometry" and not actual rendered geometry per frame, since that was the only interpretation that made sense and was not internally contradictory. It's also what the Unreal devs meant when they said models had XYZ polys during the UE5 demo: they were pointing to source assets. They made the same "sleight of hand" when demoing UE3 back in the day.
Even with REYES, you're looking at a maximum of 3,686,400 drawn triangles on screen at 1440p; with native 4K, 8,294,400 triangles.
I know we don't like resolution counting anymore because its effect on image quality seems less important than quality per pixel, but in an ideal world, native 4K is going to resolve significantly more detail than 1440p with REYES.
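For reference, those maxima are just the pixel counts of each resolution, assuming roughly one visible, pixel-sized polygon per pixel:

```cpp
// Upper bound on visible micropolygons at ~1 per pixel, which is where the
// 3,686,400 (1440p) and 8,294,400 (4K) figures come from.
#include <cstdio>

int main() {
    const int res[][2] = { {2560, 1440}, {3840, 2160} };
    for (const auto& r : res) {
        const long long pixels = 1LL * r[0] * r[1];
        std::printf("%dx%d -> %lld pixel-sized polygons max on screen\n",
                    r[0], r[1], pixels);
    }
    return 0;
}
```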
I think that's sort of the challenge going forward for video games: I just don't see assets being that large, or even shippable, for a full-length 8-12 hour game where the player can do some exploration, with most of those triangles being discarded as well.
"That lets us devote all of our system memory to the stuff in front of you right now,"
So if you believe what's being said there, then they're using the full 16GB to render the current viewport only.
I cannot believe it.
First, there are obvious things: the game world needs some simulation - AI, physics, etc. - and none of that can work if you clip away everything behind you. Now with RT this extends to graphics as well: RT reflections would not work if they really only kept the frustum contents in memory.
Of course those things can get by with lower LODs and are maybe outside the context of the quote you posted, but still - not clarifying that context makes the quote harder to take seriously.
Second, I still don't believe the SSD and IO have low enough latency for the loading to keep up with fast-moving viewports. Loading and decompressing a big amount of assets within a single frame... is that really possible?
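One way this is usually handled (a minimal sketch, not how any particular engine does it): the read and the decompression run on a worker thread across several frames while rendering continues, so nothing has to complete within a single frame. decompress_block() and the chunk filename are hypothetical placeholders:

```cpp
// Minimal sketch: stream + decompress on a worker thread so the render loop
// never blocks; data becomes usable a few frames later, not within one frame.
#include <chrono>
#include <fstream>
#include <future>
#include <iterator>
#include <vector>
#include <cstdio>

std::vector<char> decompress_block(std::vector<char> raw) {
    // Placeholder for a real decompressor (e.g. a Kraken-class codec).
    return raw;
}

std::vector<char> load_and_decompress(const char* path) {
    std::ifstream f(path, std::ios::binary);
    std::vector<char> raw((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    return decompress_block(std::move(raw));
}

int main() {
    auto pending = std::async(std::launch::async, load_and_decompress,
                              "chunk_042.bin");  // hypothetical chunk file
    for (int frame = 0; frame < 8; ++frame) {
        // ...render with whatever is already resident...
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            std::vector<char> asset = pending.get();
            std::printf("frame %d: chunk resident (%zu bytes)\n",
                        frame, asset.size());
        }
    }
    return 0;
}
```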
Pal, try to exercise some less literal interpretation of things when it's obvious people are using figures of speech or over-simplification just to make a point in a tweet.
That's what I meant - I did not intend to nitpick.
I think it’s a poor example to use the ‘as the player turns’ comment.
Now let’s fast travel to a random new part of the map, or to another planet, or beam down from a ship to a planet, or super-zoom to extreme detail miles away in a massive open world, etc. The speed of that will be directly limited by the data transfer speed.
If Sony and MS hadn’t put all this effort into the IO systems, then all games would have to be designed around slower data streaming, and that expensive extra memory would take even longer to fill up, which would mean game-design scenarios similar to what we had last gen.
They made the same "sleight of hand" when demoing UE3 back in the day.
Yes, I remember the whole Gears of War bullshots very clearly, with the final game looking nothing like them.
The UE5 demo is a showcase of quickly streaming baked data into a scene, to test SSD->VRAM speed.
Voxel cone tracing isn't new. The mere fact that you have to have voxels means it's an organized data structure where accuracy depends on the number of voxels for a reasonable approximation. A ray has infinite precision. They just aren't comparable. Also screenspace is what we are trying to get away from this gen. RT doesn't have the limitations of screenspace rendering.
It's an example of how the ability to keep more data in VRAM can be advantageous over being able to fill VRAM up quickly (but still much slower than the data already being there).
All of these things represent potential advantages of faster IO; no-one's denying that. As I said, each solution (more VRAM + slower IO vs less VRAM + faster IO) has its pros and cons. The idea I'm pushing back on is that smaller VRAM + fast IO is universally better under all circumstances.
*snip*
Am I saying a 32GB PS5 with a SATA SSD would have been the better design choice? No. I'm simply saying it would provide a different mix of advantages and disadvantages vs the current design.
I think the concept that the SSD actually solves an important graphics bottleneck is still much less prevalent than the idea that more VRAM, bandwidth, or CUs improve graphics.
This sounds like you're underestimating Lumen. If they aren't lying, then it is not baked, so the relation to the SSD is only about the extra data they need to compute all the lighting in-game.
Saying RT has perfect precision (or even better precision than the alternatives) is arguable. RT can use the same geometry we use for the frame buffer, but it can also use lower-resolution geometry for optimization purposes. Or it can use simplified material models, or even a solid color per object, like Metro does for GI AFAIK.
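A minimal sketch of that idea, choosing a representation per ray purpose; the types and names are made up for illustration and don't reflect any particular engine's API:

```cpp
// Sketch: pick geometry/material fidelity per ray purpose. Hypothetical types;
// illustrates the trade-off described above, not a specific engine.
#include <cstdio>

enum class RayPurpose { Primary, Reflection, DiffuseGI, Shadow };

struct SceneRepresentation {
    const char* geometry;  // which BVH / mesh set to trace against
    const char* shading;   // how hits are shaded
};

SceneRepresentation pick(RayPurpose p) {
    switch (p) {
        case RayPurpose::Primary:
        case RayPurpose::Reflection:
            return { "full-res BVH", "full material evaluation" };
        case RayPurpose::DiffuseGI:
            // Low-frequency signal: decimated proxies and a single albedo
            // per object are usually good enough (as Metro reportedly does).
            return { "decimated proxy BVH", "per-object flat albedo" };
        case RayPurpose::Shadow:
            return { "decimated proxy BVH", "opacity only" };
    }
    return {};
}

int main() {
    for (auto p : { RayPurpose::Primary, RayPurpose::DiffuseGI }) {
        auto r = pick(p);
        std::printf("%s / %s\n", r.geometry, r.shading);
    }
    return 0;
}
```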
I’m not sure you needed to push back though. I think the concept that the SSD actually solves an important graphics bottleneck is still much less prevalent than the idea that more VRAM, bandwidth, or CUs improve graphics.
As it stands, 16GB of data can be replaced in 2 seconds. But game worlds aren’t necessarily built in visible blocks of 16GB, because that is not how it works, or can work, today either. If you make a slow game with lots of corridors you can keep streaming in new data fast enough even with 50MB/s, a trick we learnt already when we moved from cartridge to CD-based games and that has been well developed since. But it is a cumbersome design trick, much like baking lighting into textures, hand-placing lights, LOD transitions and so on.
If you can load 8GB/s, that means you can load 8GB/60 in a single 60fps frame, which is about 133MB. That may not seem like much, but 7-8 frames gets you to 1GB already, and with that you can load a significant portion of your visible data at high detail.
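Spelling out that arithmetic:

```cpp
// The per-frame streaming budget arithmetic from the paragraph above.
#include <cstdio>

int main() {
    const double ioGBps = 8.0;    // effective (post-decompression) throughput
    const double fps    = 60.0;
    const double perFrameMB   = ioGBps * 1000.0 / fps;  // ~133 MB per frame
    const double framesFor1GB = 1000.0 / perFrameMB;     // ~7.5 frames

    std::printf("%.0f MB per frame, 1 GB in ~%.1f frames\n",
                perFrameMB, framesFor1GB);
    return 0;
}
```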
Of course that’s not the whole story. An open world has repeating data all over the place. It is far too large to keep in memory no matter how much memory you have, and if you are free to go in any direction it is hard to predict what to stream. Now consider whether higher-detail models for close-by viewing are still popping in because the renderer can’t handle them yet, or because they can’t be streamed in fast enough. While the former was the bottleneck in the past, with today’s GPU power the latter is the bigger bottleneck now.
In addition, a big open world is not built from unique data every 2 seconds of travel. The world is littered with data that keeps reappearing - you don’t need unique tree models or animals or moss or plants or houses or planks, bricks and so on... whatever. You do need a database of objects, the locations where they can be placed statically or dynamically, and storage they can be retrieved from at super high speed.
That, combined with the component prices and the general load-time improvements, means I think they made the right decision.
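The "database of objects retrievable at super high speed" part is essentially a shared-asset cache; a minimal sketch, where LoadFromDisk() is a stand-in for whatever fast IO path the platform provides:

```cpp
// Minimal sketch of a shared-asset registry: repeated world objects (trees,
// bricks, planks...) resolve to one cached copy, loaded on first use.
#include <memory>
#include <string>
#include <unordered_map>
#include <cstdio>

struct Mesh { std::string name; /* vertex/index buffers would live here */ };

class AssetRegistry {
public:
    std::shared_ptr<Mesh> Get(const std::string& id) {
        auto it = cache_.find(id);
        if (it != cache_.end()) return it->second;  // already resident
        auto mesh = LoadFromDisk(id);               // miss: stream it in
        cache_[id] = mesh;
        return mesh;
    }
private:
    static std::shared_ptr<Mesh> LoadFromDisk(const std::string& id) {
        std::printf("streaming %s\n", id.c_str());  // placeholder IO path
        return std::make_shared<Mesh>(Mesh{ id });
    }
    std::unordered_map<std::string, std::shared_ptr<Mesh>> cache_;
};

int main() {
    AssetRegistry registry;
    registry.Get("pine_tree_a");  // streamed
    registry.Get("pine_tree_a");  // reused, no second load
    return 0;
}
```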
What makes Lumen any more accurate than any other GI solution?
I'd answer: it supports infinite bounces, which is pretty new for fully dynamic methods.
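Epic hasn't detailed how Lumen gets its bounces, but a common way dynamic GI systems reach "infinite" bounces cheaply is to feed the previous result back in on every update, so the bounce count grows each iteration and converges to the full geometric series. A toy illustration with made-up numbers, not Lumen's actual algorithm:

```cpp
// Toy illustration of "infinite bounces" via feedback: each iteration injects
// direct light plus the previous iteration's bounced light.
#include <cstdio>

int main() {
    const double emitted  = 1.0;  // direct light reaching patch A
    const double coupling = 0.4;  // fraction of A's light bounced back by B
                                  // (albedo * form factor, arbitrary value)
    double radiance = 0.0;
    for (int bounce = 0; bounce < 20; ++bounce) {
        radiance = emitted + coupling * radiance;  // reuse last result
        std::printf("after %2d bounces: %.5f\n", bounce + 1, radiance);
    }
    // Converges to emitted / (1 - coupling) = 1.66667, i.e. effectively
    // infinite bounces from a cheap incremental update.
    return 0;
}
```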
I can see in the demo that Lumen doesn't correctly handle occlusion while the object is in shadow. That's because the shader isn't computing the correct light propagation on the surface and normalizing that approximation with an occlusion term.
You likely mean the missing interaction between the character and the wall. I think this is because the skinned character is ignored for GI: it only receives light (probably from a probe grid) but does not cause occlusion or bounces.
The other thing is that Lumen doesn't handle area lights.
Pretty sure it does. I expect it works better the larger the area light is. Emissive materials and arbitrary light shapes should just work; quality depends on the probe and occluder SDF grid resolutions.
I want a complete light loop for every light source (not just directional), factoring in its size as well as a decent inverse-square falloff, and I want my BRDFs to use importance sampling on the surface, as well as rays shot from the light source and to the material, using a proper PDF and sampling function.
RT / importance sampling are not the only options to achieve this, but they are the most efficient for high frequencies.
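As a concrete example of the "proper PDF and sampling function" part, here is standard cosine-weighted hemisphere sampling for a Lambertian BRDF, where the cosine term cancels against the PDF in the estimator:

```cpp
// Cosine-weighted hemisphere sampling for a Lambertian BRDF: importance
// sampling with pdf = cos(theta)/pi, so f * cos / pdf collapses to the albedo.
#include <cmath>
#include <cstdio>
#include <random>

const double kPi = 3.141592653589793;

struct Vec3 { double x, y, z; };

// Sample a direction about the +Z axis with pdf(w) = cos(theta) / pi.
Vec3 sampleCosineHemisphere(double u1, double u2, double* pdf) {
    const double r   = std::sqrt(u1);
    const double phi = 2.0 * kPi * u2;
    const double z   = std::sqrt(1.0 - u1);  // cos(theta)
    *pdf = z / kPi;
    return { r * std::cos(phi), r * std::sin(phi), z };
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double albedo = 0.8;               // Lambertian f = albedo / pi
    const int N = 100000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double pdf;
        Vec3 w = sampleCosineHemisphere(uni(rng), uni(rng), &pdf);
        sum += (albedo / kPi) * w.z / pdf;   // f * cos(theta) / pdf
    }
    std::printf("estimated reflectance: %.4f (expected %.4f)\n", sum / N, albedo);
    return 0;
}
```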
This lighting is wrong. It's too uniform, like all the other GI light-probe techniques.
I was just wondering why you were not impressed. Personally I'm not that impressed either, but the result is much better than earlier dynamic GI tech. Enough for a generational visual leap, and also enough to eliminate manual work on fill-light setup and baking times.
That's you limiting the approximation. A ray is inherently as accurate as you want. A voxel can be accurate, but you'll have to scale it down to pixel size.
Sure, but for diffuse GI we can use an optimized representation of geometry, because small-scale details don't matter much. It's fine to reduce detail here. (Currently, all the dynamic GI solutions shown so far are laggy and very expensive.)
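A toy version of "optimized representation for diffuse GI": bake the scene into a coarse signed-distance grid and trace occlusion rays against that instead of full-resolution triangles. The grid size and scene are arbitrary; this only illustrates the trade-off, not how Lumen does it:

```cpp
// Sketch: trace a diffuse-occlusion ray against a coarse signed-distance grid.
// Small surface detail is lost, which is acceptable for low-frequency GI.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int   N    = 16;          // coarse 16^3 grid over a [0,1]^3 volume
constexpr float CELL = 1.0f / N;

float sphereSDF(float x, float y, float z) {   // the "real" geometry
    const float dx = x - 0.5f, dy = y - 0.5f, dz = z - 0.5f;
    return std::sqrt(dx*dx + dy*dy + dz*dz) - 0.2f;
}

int main() {
    // Bake the scene into the coarse grid (the optimized representation).
    std::vector<float> grid(N * N * N);
    for (int z = 0; z < N; ++z)
      for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
          grid[(z*N + y)*N + x] =
              sphereSDF((x + 0.5f)*CELL, (y + 0.5f)*CELL, (z + 0.5f)*CELL);

    auto sample = [&](float x, float y, float z) {   // nearest-cell lookup
        int ix = std::clamp(int(x*N), 0, N-1);
        int iy = std::clamp(int(y*N), 0, N-1);
        int iz = std::clamp(int(z*N), 0, N-1);
        return grid[(iz*N + iy)*N + ix];
    };

    // March a ray from near one face toward the sphere (+X direction).
    float px = 0.05f, py = 0.5f, pz = 0.5f;
    float dx = 1.0f,  dy = 0.0f, dz = 0.0f;
    bool occluded = false;
    for (float t = 0.0f; t < 1.0f; ) {
        float d = sample(px + dx*t, py + dy*t, pz + dz*t);
        if (d < 0.5f * CELL) { occluded = true; break; }
        t += std::max(d, 0.25f * CELL);              // sphere-trace step
    }
    std::printf("occluded: %s\n", occluded ? "yes" : "no");
    return 0;
}
```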
That's true, but you will spend so much time hacking your scenes and custom-tailoring them to the dynamic behavior of your game. It's not the way to go for an engine that will be used by the masses like UE. Also, while you may get faster convergence to a solution of the lighting equation, it more than likely won't be a robust one (i.e. applicable to any type of game with any kind of art direction).
RT / importance sampling are not the only options to achieve this, but they are the most efficient for high frequencies.
For global light transport, the lower frequencies are the most important to get right, and there the alternatives to RT can be faster at the same quality.
RT is not the silver bullet to solve all lighting problems just because it can do that.
Looks like good compromises to me. Again, I'm happy as an artist - I would only complain about the missing sharp reflections. They say they'll make Lumen faster (easy: just accept more lag), and they'll also try reflection tracing (that worked fast with VCT approaches, though missing characters may become a problem then).
I'm not sure you needed to push back though. I think the concept that the SSD actually solves an important graphics bottleneck is still much less prevalent than the idea that more VRAM, bandwidth, or CUs improve graphics.
I'd love to know what big developers would rather have had if given the option.
First, there are not 16 GB of RAM available to the game on PS5. If it is the same as on Xbox, there are 13.5 GB for the game and 2.5 GB reserved by the OS, and using Oodle Kraken and Oodle Texture the average decompressed throughput is 10 to 11 GB/s, which means between 1.22 and 1.35 seconds to fully fill the memory. Some of the memory is also reserved for the gameplay logic and simulation of the world.
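Sanity-checking that fill-time math (the 13.5 GB figure is the Xbox-like assumption from the post above, not a confirmed PS5 number):

```cpp
// Fill-time arithmetic: usable RAM divided by the effective (decompressed)
// throughput quoted for Kraken + Oodle Texture data.
#include <cstdio>

int main() {
    const double usableRamGB = 13.5;  // assumed Xbox-like OS reservation
    for (double effectiveGBps : { 10.0, 11.0 })
        std::printf("%.1f GB at %.0f GB/s -> %.2f s\n",
                    usableRamGB, effectiveGBps, usableRamGB / effectiveGBps);
    // Prints roughly 1.35 s and 1.23 s, matching the 1.22-1.35 s range above.
    return 0;
}
```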
We saw in the last Ratchet and Clank video that loading from one level to another took about 1.4 seconds, and the community manager said everything SSD-related will be slightly faster in the final game. That is fine; a bit over a second of waiting is not the end of the world, and like you said, it is the cheaper and more clever option.
Another thing: it will be easier to do portals, where you can open a portal anywhere in the game world and go anywhere else in it, like for example in a Doctor Strange game (and the character is rumored to be in the next Spider-Man).

I am not sure if that is public info, but I think the PS5 may have less RAM available than the XSX? Sony has not publicly given out the reservation numbers for threads and RAM yet.