The need for sustained high throughput loading of data in games? *spawn*

You can land/crash anywhere in the game. You can't predict where a player will land.
That's not providing a video. Have you seen video evidence of people crashing in a city and the asset quality at that street level?
If your house is part of a city that has been scanned, you can distinctly recognise it. Some users even report seeing their car in front of their porch.
But at a low LOD. Saying you can land on your house is claiming a far higher LOD, or that the house is poor quality.

Seriously, it shouldn't be that hard for you to dig up a video/screenshot you've already seen of the level of detail you're claiming is present. ;)

I'm tired of looking now. Every single view I ever see of human habitations is either at high altitude or in the airports. I've searched and found things like this article and these and these. There's nothing even remotely detailed, and all buildings at this LOD can just be boxes. You really need to present some visual evidence at this point, rather than making claims like you can land on your house or can crash anywhere without any hard proof. Without evidence to the contrary, I think FS2020's detail is really just lots of boxes with photogrammetry textures pasted on top and the streaming demands aren't at all as high as other games.
 
The salient question for me is: if a platform exclusive relies entirely on SSD streaming, can one or more games be achieved that are better than what was previously possible? Would such reliance on streaming be a burden or an easement for the game developer? I think we will see at least some games doing this. Spider-Man 2, whenever it comes out, is the likeliest candidate when you remember how advanced a streaming system the PS4 version already had. Many games might just see the improvement in load times/quick travel and not really rely on really fast streaming on a per-frame basis.

My 2 cents is that I believe the (not near) future is going to be neural-network/machine-learning-driven speculative loading/caching. Basically, implement a couple of neural networks. One network predicts player action/movement, which gives a high-level overview of what might be needed 1-2 s later. Then build another network on top of the player prediction to manage what to keep in/evict from the cache. The keep/evict policy would likely optimise for things like minimising user-noticeable artifacts. Ideally the networks would keep learning even after the game's release.

Train the networks by playing the game and through self-play. The ground truth is cache misses/hits and unused/used assets. Make the training a GAN-like approach and the networks will get better than human-programmed heuristics, especially as we can likely give a neural network much more input to sort through than hand-programmed heuristics could use. I don't see any reason why a DNN/ML approach wouldn't destroy hand-tuned heuristics. To me this feels like a very similar problem to chess/go, and DNN/ML did wonders there.
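To make that concrete, here's a rough sketch of how the two networks could plug into an asset cache. Everything here is hypothetical: the predictor, the eviction policy, and the cache interface are invented for illustration, not any real engine's API.

```python
# Hypothetical sketch of NN-driven speculative asset caching.
# Both models are illustrative stand-ins; a real engine would differ
# substantially in inputs, outputs, and integration.

class SpeculativeAssetCache:
    def __init__(self, movement_predictor, eviction_policy, capacity_bytes):
        self.predictor = movement_predictor    # NN #1: predicts player state ~1-2 s ahead
        self.policy = eviction_policy          # NN #2: scores assets for keep/evict
        self.capacity = capacity_bytes
        self.resident = {}                     # asset_id -> bytes

    def tick(self, player_state, asset_db, io_queue):
        # 1. Predict likely near-future positions/actions from recent player state.
        predicted_states = self.predictor.predict(player_state, horizon_s=2.0)

        # 2. Map predictions to the set of assets plausibly needed soon.
        candidates = set()
        for state in predicted_states:
            candidates |= asset_db.assets_visible_from(state)

        # 3. Queue loads for predicted-but-missing assets.
        for asset_id in candidates:
            if asset_id not in self.resident:
                io_queue.request_load(asset_id)

        # 4. If over budget, evict the assets the policy scores least valuable.
        #    The score could weigh predicted reuse and the visual cost of a
        #    miss (a visible pop-in is worse than a distant LOD swap).
        while self._used_bytes() > self.capacity:
            victim = min(self.resident,
                         key=lambda a: self.policy.keep_score(a, predicted_states))
            del self.resident[victim]

    def on_load_complete(self, asset_id, data):
        # Called by the I/O system when a queued read finishes.
        self.resident[asset_id] = data

    def _used_bytes(self):
        return sum(len(data) for data in self.resident.values())
```

The training labels fall out at runtime: each frame you can log which queued assets were actually used and which resident assets were never touched, which is exactly the misses/hits/unused/used ground truth described above.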
 
When describing storage as being as fast as RAM, latency is part of the equation. Storage at 400 GB/s with a one-minute latency would never be described as 'as fast as next-gen console memory'. ;)

Is that the latency for the user to flip the disc over when it's half-way through?
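Jokes aside, the arithmetic backs this up: fetch time is roughly latency plus size over bandwidth, so headline bandwidth means nothing once latency dominates. A toy calculation (the 64 KB read size is an arbitrary illustrative choice):

```python
# Effective fetch time = latency + size / bandwidth.
size = 64 * 1024                      # bytes; arbitrary illustrative read size
bandwidth = 400e9                     # 400 GB/s headline figure

for latency in (100e-9, 60.0):        # 100 ns (RAM-like) vs one minute
    t = latency + size / bandwidth
    print(f"latency {latency:>8}s -> fetch takes {t:.9f}s "
          f"({size / t / 1e9:.6f} GB/s effective)")
```

At 100 ns latency you keep ~248 GB/s effective; at one minute you're down to about a kilobyte per second, whatever the headline number says.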
 
That's not providing a video. Have you seen video evidence of people crashing in a city and the asset quality at that street level?
But at a low LOD. Saying you can land on your house is claiming a far higher LOD, or that the house is poor quality.

Seriously, it shouldn't be that hard for you to dig up a video/screenshot you've already seen of the level of detail you're claiming is present. ;)

I'm tired of looking now. Every single view I ever see of human habitations is either at high altitude or in the airports. I've searched and found things like this article and these and these. There's nothing even remotely detailed, and all buildings at this LOD can just be boxes. You really need to present some visual evidence at this point, rather than making claims like you can land on your house or can crash anywhere without any hard proof. Without evidence to the contrary, I think FS2020's detail is really just lots of boxes with photogrammetry textures pasted on top and the streaming demands aren't at all as high as other games.

I can't provide the alpha videos to which I have been made privy due to NDA. You can find the publicly available videos/pics at AVSIM. But yes, you can indeed recognise your own house in the sim.
Also, believe whatever you want. Proof of it will be shown in five months time.
 
Personally I don't get the thinking that a fast SSD is somehow going to enable more detailed worlds in games... PS5 memory bandwidth (448 GB/s) vs PS5 I/O throughput (5.5 GB/s raw, typically 8-9 GB/s compressed) gives you the answer to this question. It's going to cut down load times and also make life easier for devs. SSDs are still one of the most exciting additions to the next-gen consoles, but not for the reason most are hoping.
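For scale, divide both figures by a 60 fps frame budget (using the numbers quoted above; 8.5 GB/s is just the midpoint of the compressed range):

```python
fps = 60
ram_bw = 448e9          # PS5 memory bandwidth, bytes/s
ssd_bw = 8.5e9          # midpoint of the 8-9 GB/s compressed figure

print(f"RAM per frame: {ram_bw / fps / 1e9:.2f} GB")   # ~7.47 GB
print(f"SSD per frame: {ssd_bw / fps / 1e6:.0f} MB")   # ~142 MB
```

So per frame the SSD can at best replace a small slice of what the GPU can touch in RAM; it changes what can become resident over a second or two, not what can be consumed every frame.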
 
I can't provide the alpha videos to which I have been made privy due to NDA.
Fair enough. I'll wait and see.
Also, believe whatever you want. Proof of it will be shown in five months time.
Belief doesn't enter into it. This is a technical discussion on B3D. Arguments should be based on evidence and data. If you can't present that data, you can't make a technical argument and should just say up front that there's zero evidence so people like me don't waste a fair bit of time trying to find out what FS2020 is capable of.
 
Belief doesn't enter into it. This is a technical discussion on B3D. Arguments should be based on evidence and data. If you can't present that data, you can't make a technical argument and should just say up front that there's zero evidence so people like me don't waste a fair bit of time trying to find out what FS2020 is capable of.
I'm glad I'm not the only person who felt like this. One of my user lists got a little longer today. :-|
 
SSDs are still one of the most exciting additions to the next-gen consoles, but not for the reason most are hoping.

If it was, they would have shown that by now. So far, nothing offers the kind of leap that overshadows going from 2D to 3D, hardware T&L, or the jump from PSX to PS2, PS2 to PS3, or even PS3 to PS4, which was more impressive (KZ: Shadow Fall etc.) than what we have seen so far going into next gen.
 
FS has a whole sim to run, dynamic weather and ToD etc.
But for the terrain rendering detail being discussed here, the tech is not much different, just slightly more refined I guess.
It's still in alpha, though, so we'll see how much better the final product looks.

 
Fuck that!!! Scenery!!! Have you seen games like TLOU2 and TD/TD2? It's been years since I've come across a well-funded game that came off as repetitive in terms of scenery texture use. Poor LOD transitions? Pop-ins? Mildly irritating, yes. But hardly game breaking.

Now enemy NPCs!!! Damn it!!! Literally every game you spend the majority of your time interacting with the same group of septuplets and octuplets all game long.

If this streaming tech does something about that, I would consider it the greatest gen ever!!!

LOL.
 
I can't provide the alpha videos to which I have been made privy due to NDA. You can find the publicly available videos/pics at AVSIM. But yes, you can indeed recognise your own house in the sim.
Also, believe whatever you want. Proof of it will be shown in five months time.

This shouldn't be surprising in the least. The highest detail levels of Google Maps satellite view can already let you pick out individual cars as being different body types and colors. The key to Flight Sim 2020 isn't the resolution of the assets, just that the install size for the whole planet would be absolutely enormous. For most games online asset streaming is nonsense, but here it's essential, unless you have twenty spare terabytes, or whatever they said the final size was, just lying around.

Sadly, both game budgets and the overhead cost of this mean it's not going to show up in almost any other game. Hell, they're getting most of the texture and land data etc. from satellite photos and whatnot; probably a pretty big cost already, but since they have it lying around from their mapping division, hells, why not use it? I mean, that is what all the base assets are from, don't mistake it for a moment. Did you really think a dev team actually recreated a high-res version of the entire planet? C'mon.

But the visuals do showcase stuff MS has been demoing for a long, long time: point-cloud reconstruction of architecture based on multiple random user photos, etc. No doubt they spent extra time on all the landmarks, trained an AI to identify general tree types and then spawned models in where the satellite photos showed them, or something similar. But the whole game is definitely heavily based on mapping data and uses that to some large extent. Hells, you can see in that video above how much more detailed Google Maps is for areas not specifically touched by the devs: no 3D models of cars there generated by AI and multiple aerial photography camera angles or however they do it, just a flat ground texture from a satellite photo.
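For a sense of why it has to be streamed, here's a back-of-envelope estimate of the raw imagery alone. The resolution and compression figures below are pure guesses for illustration, not Asobo's actual numbers:

```python
# Rough, illustrative estimate of imagery data for the whole planet.
# Assumed resolution and compression are guesses, not actual FS2020 figures.
earth_surface_m2 = 510e12          # ~510 million km^2, land + ocean
texel_size_m = 0.5                 # assumed average imagery resolution
bytes_per_texel = 0.5              # assumed aggressive compression

texels = earth_surface_m2 / texel_size_m**2
total_pb = texels * bytes_per_texel / 1e15
print(f"~{total_pb:.0f} PB of imagery alone")   # ~1 PB at these assumptions
```

Even with aggressive assumptions you land in petabyte territory, which is why the data set lives server-side and only a cache ends up on local disk.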
 
Does virtual geometry work in the same way that virtual texturing works? If yes, then I could imagine that SSD I/O throughput will be highly important.
 
Does virtual geometry work in the same way that virtual texturing works? If yes, then I could imagine that SSD I/O throughput will be highly important.
Sounds like it. The Unreal guys said something along the lines of having designed their new engine with the PS5's SSD in mind. Now, how much of that they actually need, and whether we're also talking about IOPS rather than merely bandwidth, who knows.
They also said they didn't need huge amounts of bandwidth, at least with Nanite, but we don't know what that means.
With a lot of handwaving and oversimplification, maybe we're looking at engines that balance compute vs storage bandwidth, leveraging one when the other is lacking, if we're talking about transforming some large geometry/texture/??? data set into a stream of data that paints the field of view in front of the player.
Sometimes you don't encode the data too tightly if the compute resources you have to play with aren't up to snuff; and if your bandwidth is limited, throw more compute at decoding a more elaborately crafted data set.
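If virtual geometry really does mirror virtual texturing, the residency loop might look something like the sketch below: geometry stored as fixed-size pages on disk, a feedback pass reporting which pages the current view needs, and LRU eviction against a memory budget. This is a guess at the general shape, not Nanite's actual implementation; all the names, plus the `compression_level` knob standing in for the compute-vs-bandwidth trade, are invented.

```python
# Hypothetical virtual-geometry residency loop, by analogy with
# virtual texturing. Names and structure are illustrative only.

class VirtualGeometryStreamer:
    def __init__(self, page_store, budget_bytes, compression_level):
        self.store = page_store              # on-disk pages of geometry clusters
        self.budget = budget_bytes
        # Higher compression: less SSD bandwidth used, but more decode
        # work per page -- the compute-vs-bandwidth trade described above.
        self.compression_level = compression_level
        self.resident = {}                   # page_id -> decoded page
        self.last_used = {}                  # page_id -> frame index

    def update(self, frame, needed_page_ids):
        # needed_page_ids comes from a feedback pass: which geometry pages
        # the current view requires at its screen-space error threshold.
        for page_id in needed_page_ids:
            self.last_used[page_id] = frame
            if page_id not in self.resident:
                self.resident[page_id] = self.store.read_and_decode(
                    page_id, self.compression_level)

        # Evict least-recently-used pages once over the memory budget.
        while self._used_bytes() > self.budget:
            victim = min(self.resident, key=lambda p: self.last_used[p])
            del self.resident[victim]

    def _used_bytes(self):
        return sum(page.byte_size for page in self.resident.values())
```

Under that framing, IOPS matter as much as bandwidth: the access pattern is lots of small scattered page reads rather than one big sequential load, which would square with not needing huge amounts of raw bandwidth.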
 
Now enemy NPCs!!! Damn it!!! Literally every game you spend the majority of your time interacting with the same group of septuplets and octuplets all game long.

If this streaming tech does something about that, I would consider it the greatest gen ever!!!

LOL.
NPC AI doesn't really sell games so it's not focused on. None of it needs better hardware. Every gen is the same because the bottleneck is dev time, not hardware.
 