Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

And initial load times wouldn't have to suffer much as you wouldn't have to fill RAM with the entire game content just to start playing. You load what you need to begin and stream the rest in during gameplay.
Firstly, how long would it take to fill the RAM from an HDD?
Secondly, streaming is also limited by the speed of the storage device.
So you used R&C as an example: given you can portal through to different worlds very quickly, how many worlds could you prefetch/stream in, given it's not a linear route you have to take?

I'm personally not commenting on which was the biggest, most important upgrade to this current gen. I'm highlighting the flaw in the idea that simply having more memory negates the SSD.
Having to get that amount of data off the HDD wouldn't solve the game design issues that the SSD does.

Talking about the speed and latency of RAM ignores the weakest link in the chain: the HDD.
More RAM can be used for many things, but as a substitute for an SSD, not so much.
If we were talking about more RAM as a buffer for a slower SSD compared to a faster one, sure (PS5 vs a PC SSD, for example). An HDD is a whole different magnitude.
 
I never said other parts of the hardware don't matter. I said the biggest leap in raw numbers this time over last gen is the data rate: speeds 50x to 90x faster, and that's what is shown for the first time in a next-gen-only game, with R&C for example. Until now SSDs only served for faster level loading and not in gameplay, because devs always worked with HDDs in mind for compatibility purposes; from now on things should be different, at least for a number of games in the near future.
If an open-world game like RDR2 is made to run at the lowest speed of 50MB/s for worst-case scenarios on PS4 for real-time asset streaming, imagine the jump in fidelity in RDR3 using those 5GB/s.
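To put a rough number on that gap, here is a minimal back-of-envelope sketch of the per-frame streaming budget, assuming the 50MB/s and 5.5GB/s figures quoted in this thread (the numbers are illustrative only):

```python
# Rough per-frame streaming budget at the data rates quoted above.
HDD_MB_S = 50      # MB/s, PS4 worst-case streaming figure quoted above
SSD_MB_S = 5500    # MB/s, PS5 raw SSD figure (compressed rates are higher still)

for fps in (30, 60):
    hdd_per_frame = HDD_MB_S / fps
    ssd_per_frame = SSD_MB_S / fps
    print(f"{fps} fps: {hdd_per_frame:.2f} MB/frame (HDD) vs "
          f"{ssd_per_frame:.0f} MB/frame (SSD), ~{ssd_per_frame / hdd_per_frame:.0f}x")

# 30 fps: 1.67 MB/frame (HDD) vs 183 MB/frame (SSD), ~110x
# 60 fps: 0.83 MB/frame (HDD) vs 92 MB/frame (SSD), ~110x
```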
 
There is a moment in the game where you go through multiple environments very fast; they said it.

So what? Do they navigate through the full content within each one of those environments or just a very small part of it? If you have 50% of the game in VRAM then why couldn't you have that small part of each environment in VRAM simultaneously?

If you want to have no game design limitations, or not to have to think about what you need to do, the solution is a fast SSD and a reasonable amount of RAM: 16GB UMA on console, or 16, 24 or 32GB on PC with an NVMe SSD.

Except you absolutely are limited with a fast SSD and small RAM vs a slower SSD and more RAM. Both allow different approaches that the other doesn't. Developers would still have to restrict themselves to the limits of 16GB of VRAM regardless of the ability to relatively quickly load in more data from storage. 16GB VRAM + 800GB fast SSD does not equate to 816GB of VRAM.

The SSD is not only an advantage for the player but for the game and level designers, who don't need to think about any technical problem. Here, if they need something in memory, it will take a maximum of about 1.5 seconds even if they need to reload the full memory.

Which is far longer than if that data were already in VRAM thanks to a larger VRAM pool, i.e. a compromise. To give a very simple example, that's the difference between needing a transition portal and needing no transition at all.

On the design side, the only limitations are the size of the game and the limits of CPU and GPU power.

If this were true and the SSD were fast enough to remove ALL limitations, then why do these consoles need VRAM at all? Or why not 8GB? Or less?
 
So what? Do they navigate through the full content within each one of those environments or just a very small part of it? If you have 50% of the game in VRAM then why couldn't you have that small part of each environment in VRAM simultaneously?



Except you absolutely are limited with a fast SSD and small RAM vs a slower SSD and more RAM. Both allow different approaches that the other doesn't. Developers would still have to restrict themselves to the limits of 16GB of VRAM regardless of the ability to relatively quickly load in more data from storage. 16GB VRAM + 800GB fast SSD does not equate to 816GB of VRAM.



Which is far longer than if that data were already in VRAM thanks to a larger VRAM pool, i.e. a compromise. To give a very simple example, that's the difference between needing a transition portal and needing no transition at all.



If this were true and the SSD were fast enough to remove ALL limitations, then why do these consoles need VRAM at all? Or why not 8GB? Or less?



The emphasis above is mine because you're literally making my point for me there.

Because you think the GPU can render 816GB of assets? lol. There is a limit to what the GPU can render, and the GPU never renders 10 or 12GB of assets at the same time; some of that is asset preloading too. With virtualization it's much less: you only need to have in memory roughly 1 polygon per pixel and 1 texel per pixel for a few frames.

I suppose at the end of the game you will have a place where you can go to any portal and it loads the full level. Likewise, what you are proposing limits the game design if you can go anywhere and move very fast through the level; the slow SSD would be too slow.

A fast SSD (Velocity Architecture, the PS5 SSD, or soon DirectStorage on PC) allows anything from the design point of view: if you want portals and Superman-level speed combined, you can do it without any design or graphics limitations.

What the ex-ND guy said is that the constraint was not the RAM but the fact that you can't load LOD0 assets into RAM fast enough, in an open-world or wide-linear game, for the GPU to render them.
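To give a feel for why that "1 polygon per pixel, 1 texel per pixel" working set is so much smaller than the full asset library, here is a minimal back-of-envelope sketch; the per-polygon and per-texel byte counts and the number of resident frames are assumptions for illustration, not figures from Epic or Sony:

```python
# Very rough size of the "1 polygon per pixel, 1 texel per pixel" working set.
# The byte counts and resident-frame count below are assumptions for illustration;
# real virtualized geometry/texturing compresses harder than this.
WIDTH, HEIGHT = 3840, 2160              # 4K render target
pixels = WIDTH * HEIGHT                 # ~8.3M pixels

BYTES_PER_POLYGON = 12                  # assumed compressed geometry per visible triangle
BYTES_PER_TEXEL = 1                     # assumed block-compressed texture data
FRAMES_RESIDENT = 4                     # assumed "few frames" of unique data kept resident

working_set = pixels * (BYTES_PER_POLYGON + BYTES_PER_TEXEL) * FRAMES_RESIDENT
print(f"~{working_set / 2**20:.0f} MiB resident")   # ~411 MiB, far below a 16GB pool
```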
 
Also, as a reminder, to say it in Andrew's words (with a little change): "But we can't store all of the super detailed high res versions for all objs on the SSD at once."
The new limit for the current consoles will just be space on the SSD.
Another aspect is that the cost of content creation would go up.
But I guess for special cases where the camera is in a fixed position (cutscenes) it will work just fine, though it might be better (for SSD space) to save those as a movie file.
 
Also, as a reminder, to say it in Andrew's words (with a little change): "But we can't store all of the super detailed high res versions for all objs on the SSD at once."
The new limit for the current consoles will just be space on the SSD.
Another aspect is that the cost of content creation would go up.
But I guess for special cases where the camera is in a fixed position (cutscenes) it will work just fine, though it might be better (for SSD space) to save those as a movie file.

He created a company for asset placement using AI: Promethean AI. AI will be a major element in game creation, same as real-time GI, much bigger geometry density, or being able to work directly with ZBrush models in a game engine.

He said in the Twitter thread that the tools now need to improve a lot because of budget constraints.
Also, as a reminder, to say it in Andrew's words (with a little change): "But we can't store all of the super detailed high res versions for all objs on the SSD at once."
The new limit for the current consoles will just be space on the SSD.
Another aspect is that the cost of content creation would go up.
But I guess for special cases where the camera is in a fixed position (cutscenes) it will work just fine, though it might be better (for SSD space) to save those as a movie file.

It will affect the variety of assets; for example, a statue at the UE5 demo's level of detail will be a hero asset. That doesn't mean you will never see assets of this quality in a game, but he expects common assets in the game to be less detailed.
 
Firstly, how long would it take to fill the RAM from an HDD?

To be clear, I'm not saying more RAM with an HDD would be better, just that it provides a different set of advantages and disadvantages. To fill 64GB of VRAM from an HDD would obviously take roughly 10 minutes, but if that 64GB of RAM represents a full 50% of your game content, then how much of it do you actually need to fill to start playing the game? For any game made this generation, obviously no more than about 25% at maximum. A lesser amount of RAM with, say, a SATA SSD might be a better compromise: 32GB would take about 1 minute to fill in that scenario, but time-to-game could be much less. So you still lose initial load times vs the NVMe, but you open up other options on account of having 32GB of VRAM.
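As a sanity check on those load-time figures, here is a minimal sketch using assumed sustained read speeds (~100MB/s HDD, ~500MB/s SATA SSD, ~5.5GB/s NVMe); real drives vary:

```python
# Approximate time to fill a memory pool at an assumed sustained read speed.
def fill_time_s(pool_gb: float, speed_mb_s: float) -> float:
    return pool_gb * 1024 / speed_mb_s

print(fill_time_s(64, 100))    # ~655 s, roughly 11 minutes: 64GB from a ~100 MB/s HDD
print(fill_time_s(32, 500))    # ~66 s, roughly 1 minute: 32GB from a ~500 MB/s SATA SSD
print(fill_time_s(16, 5500))   # ~3 s: 16GB from a ~5.5 GB/s NVMe drive
```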

Secondly, streaming is also limited by the speed of the storage device.

Only partially. The more data you have in VRAM, the less you have to stream. With 50% of your game in VRAM, streaming requirements should be greatly lessened and much easier to predict.

So you used R&C as an example: given you can portal through to different worlds very quickly, how many worlds could you prefetch/stream in, given it's not a linear route you have to take?

From what I can see, the portals to which you can transition are pre-defined. So the engine would load the environment on the other side of whatever portals you're closest to. If it's not pre-defined then perhaps you could pre-cache the areas immediately on the other side of the portals for every environment. In R&C's case with 64GB VRAM you could basically pre-cache the entire game.
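A minimal sketch of what that portal pre-caching policy could look like; all names here (Portal, size_of, load_environment) are hypothetical and purely illustrative, not how Insomniac's engine actually works:

```python
from dataclasses import dataclass

@dataclass
class Portal:
    destination: str            # id of the environment on the other side
    position: tuple             # (x, y, z) position of the portal in world space

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def prefetch_nearest_portals(player_pos, portals, budget_bytes, size_of, load_environment, k=2):
    """Pre-cache the areas behind the k nearest portals, within a streaming budget."""
    nearest = sorted(portals, key=lambda p: distance(player_pos, p.position))[:k]
    used = 0
    for portal in nearest:
        cost = size_of(portal.destination)     # bytes needed for the area behind this portal
        if used + cost > budget_bytes:
            break                              # stop once the prefetch budget is spent
        load_environment(portal.destination)   # issue the (async) streaming request
        used += cost
```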

I'm highlighting the flaw in the idea that simply having more memory negates the SSD.

I can see how things have got muddled with all the back and forth above, but this definitely isn't what I'm arguing. My argument is kind of the opposite of that: that an SSD doesn't negate having more memory, i.e. fast IO with smaller memory and slower IO with more memory are different approaches, each with their own advantages and disadvantages. One is not universally better than the other (except from a cost perspective perhaps).
 
To be clear, I'm not saying more RAM with an HDD would be better, just that it provides a different set of advantages and disadvantages. To fill 64GB of VRAM from an HDD would obviously take roughly 10 minutes, but if that 64GB of RAM represents a full 50% of your game content, then how much of it do you actually need to fill to start playing the game? For any game made this generation, obviously no more than about 25% at maximum. A lesser amount of RAM with, say, a SATA SSD might be a better compromise: 32GB would take about 1 minute to fill in that scenario, but time-to-game could be much less. So you still lose initial load times vs the NVMe, but you open up other options on account of having 32GB of VRAM.



Only partially. The more data you have in VRAM, the less you have to stream. With 50% of your game in VRAM, streaming requirements should be greatly lessened and much easier to predict.



From what I can see, the portals to which you can transition are pre-defined. So the engine would load the environment on the other side of whatever portals you're closest to. If it's not pre-defined then perhaps you could pre-cache the areas immediately on the other side of the portals for every environment. In R&C's case with 64GB VRAM you could basically pre-cache the entire game.



I can see how things have got muddled with all the back and forth above, but this definitely isn't what I'm arguing. My argument is kind of the opposite of that: that an SSD doesn't negate having more memory, i.e. fast IO with smaller memory and slower IO with more memory are different approaches, each with their own advantages and disadvantages. One is not universally better than the other (except from a cost perspective perhaps).

Humans don't like to wait, and in a few years all games will be made around NVMe SSDs on consoles and PC. It will be a problem to wait one minute or more to load a game.

The only constraint is a 1-second transition, which is OK.

[gif: portal transition]


The gif is half speed

Here it means a few seconds to load a game, and portal transitions are 1 second, or a few seconds at maximum, depending on SSD speed. This is a reasonable amount of time.

If Matt from Era told the truth, it seems Sony was impressed by the Quick Resume functionality of the Xbox Series consoles and they are currently working on implementing their own version. It would make the initial load of a game even faster than it currently is on PS5, and it could solve Returnal's save problem.
 
@chris1515 I think the issue is you are arguing with PC elitists who are happy to just spend as much as it takes and use pure grunt to get an upgrade. What Sony has done for the cost is simply amazing, and what MS are doing will bring similar results for Xbox and PC from what I can gather.

No need to pay big bucks for more RAM. The problem the PC guys aren't considering is that almost no one would develop a game for such a tiny minority of gamers. So their whole argument is flawed and built around scenarios which won't be happening, just so they can somehow prove the SSD isn't a big step forward for games, graphics and developers.
 
@chris1515 I think the issue is you are arguing with PC elitists who are happy to just spend as much as it takes and use pure grunt to get an upgrade. What Sony has done for the cost is simply amazing, and what MS are doing will bring similar results for Xbox and PC from what I can gather.

No need to pay big bucks for more RAM. The problem the PC guys aren't considering is that almost no one would develop a game for such a tiny minority of gamers. So their whole argument is flawed and built around scenarios which won't be happening, just so they can somehow prove the SSD isn't a big step forward for games, graphics and developers.

https://www.resetera.com/threads/ratchet-clank-rift-apart-–-state-of-play-4k-gameplay-showcase.417906/post-63959019

And the transition was reduced, and according to the community manager it seems everything linked to the SSD will be slightly faster in the final version. Transitions are less than 1.4 seconds.

And this isn't about the speed, either, although time-in-portal has been reduced from about 2.6 seconds on the transitions I checked on the video from 8 months ago to about 1.4 seconds on the transition from today's video.

In the end, when the player finishes the game, I am not even sure the total time spent in portals or loading the game on a fast SSD will be longer than the accumulated load times of playing the game each time with more RAM and a slower SSD with a long initial load.
 
@chris1515 I think the issue is you are arguing with PC elitists who are happy to just spend as much as it takes and use pure grunt to get an upgrade. What Sony has done for the cost is simply amazing, and what MS are doing will bring similar results for Xbox and PC from what I can gather.

No need to pay big bucks for more RAM. The problem the PC guys aren't considering is that almost no one would develop a game for such a tiny minority of gamers. So their whole argument is flawed and built around scenarios which won't be happening, just so they can somehow prove the SSD isn't a big step forward for games, graphics and developers.
Yeah, I'm also having trouble with the "just put in 64GB of RAM, silly!" kind of arguments. Obviously that's an option, but one that makes no sense for consoles and is basically never going to happen even in the next gen, and no PC game will require that amount anyway, so I'm not sure I understand the point of this line of discussion.
 
@chris1515 I think the issue is you are arguing with PC elitists who are happy to just spend as much as it takes and use pure grunt to get an upgrade. What Sony has done for the cost is simply amazing, and what MS are doing will bring similar results for Xbox and PC from what I can gather.

No need to pay big bucks for more RAM. The problem the PC guys aren't considering is that almost no one would develop a game for such a tiny minority of gamers. So their whole argument is flawed and built around scenarios which won't be happening, just so they can somehow prove the SSD isn't a big step forward for games, graphics and developers.
Yea, this is true.

There's a sweet spot and balance for each console development phase, depending on what the current tech climate is like... and there's also a sweet spot and balance in PC design as well. Certain things will scale up on the PC side the more you throw at it... but on the game design side, you're always going to be restricted by what the dev/publisher determines is the optimal range of hardware to target.

The new focus on I/O throughput on the consoles has already had a massive influence on the future of PC gaming. OS improvements, new APIs to improve storage-to-VRAM access, GPU-based asset decompression... all of which give developers the ability to design things smarter and better utilize the hardware that's already there.
 
Yea, this is true.

There's a sweet spot and balance for each console development phase, depending on what the current tech climate is like... and there's also a sweet spot and balance in PC design as well. Certain things will scale up on the PC side the more you throw at it... but on the game design side, you're always going to be restricted by what the dev/publisher determines is the optimal range of hardware to target.

The new focus on I/O throughput on the consoles has already had a massive influence on the future of PC gaming. OS improvements, new APIs to improve storage-to-VRAM access, GPU-based asset decompression... all of which give developers the ability to design things smarter and better utilize the hardware that's already there.

And Nvidia and AMD talk about adding hardware decompressors on next-generation PC GPUs. That will be smarter than using some of the GPU's compute power. This doesn't change anything: everything will work faster on PC. PCIe 5 SSDs will arrive in 2022 or 2023.

Future PCs will have faster GPUs, faster CPUs, faster SSDs and faster hardware decompressors than consoles. That means better framerates, better resolution, better texture filtering, better lighting, sub-1-second portal transitions and faster loading than consoles, as always.

From an asset perspective, virtualized geometry will help reach 1-polygon-per-pixel density, for the moment only for rigid geometry. We are beginning to see new ways to represent hair or fur with the Frostbite or Insomniac engines, needing sub-pixel precision and special-case AA with analytical AA. Maybe we will see some new ways to represent grass and leaves too, which also need sub-pixel precision.

And in future iterations of Nanite, they think they can have skinned geometry and transparent materials and support tessellation and displacement, which is exciting too.

Maybe when all these geometry innovations merge together at the end of the generation, we will have nearly no visible pop-in.

We can reach 1-texel-per-pixel density for textures.
https://wccftech.com/playstation-5-texel-density-specs-exciting/

We worked a lot in order to use the highest-resolution textures as possible also on PS4; nonetheless, PlayStation 5 will allow us to use an incredible Texel density, up to 4096px/m – that means the visual will be fully detailed also in higher resolutions. It’s one of the most important advances in visual capacity that we were waiting for.
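For a rough sense of what that 4096px/m texel density costs in memory, a minimal sketch follows; the ~1 byte per texel (block-compressed) figure and the single material layer are assumptions for illustration:

```python
# Memory cost of one square meter of surface at the quoted texel density.
TEXELS_PER_METER = 4096                  # density quoted above
BYTES_PER_TEXEL = 1                      # assumed block-compressed (BC-class) data
texels_per_m2 = TEXELS_PER_METER ** 2    # ~16.8M texels per square meter
print(f"~{texels_per_m2 * BYTES_PER_TEXEL / 2**20:.0f} MiB per m^2 per layer")   # ~16 MiB
```

Which is also why virtual texturing keeps only the texels actually visible on screen resident rather than whole material sets.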

Lighting will continue to see improvement, maybe with hybrid lighting engines mixing SDF or voxels with triangle-based raytracing.

Another area of progress will be fluid simulation assisted by AI.
https://blog.siggraph.org/2019/04/p...learning-for-real-time-fluid-simulation.html/

SIGGRAPH: What are you currently focused on improving about Physics Forests and working toward?

LL: We are developing a general framework for the simulation of multiple physics phenomena built around the machine-learning paradigm. Currently, we are focusing on rigid bodies, fracture, and destruction, aiming to achieve similar speed-up over existing methods as we managed to get for fluid simulations. Concurrently, we develop plugins for existing SFX frameworks and game engines.

Same with incredible tools for animation, or improving motion matching technology using AI on Ubi's side.

Or use motion synthesis like EA R&D.

This generation is very exciting. Also, if they continue to design the CPU side of things around the possibility of a 60fps mode, it will cost less on the GPU side for PC players to play at 144Hz, for example.
 
And Nvidia and AMD talk about adding hardware decompressors on next-generation PC GPUs. That will be smarter than using some of the GPU's compute power. This doesn't change anything: everything will work faster on PC. PCIe 5 SSDs will arrive in 2022 or 2023.

Future PCs will have faster GPUs, faster CPUs, faster SSDs and faster hardware decompressors than consoles. That means better framerates, better resolution and better lighting than consoles, as always.

Yea, future GPUs will have hardware decompression built right in as well.

In the meantime, GPU-compute-based decompression will allow PC to keep up in this transition phase. It's also kind of lucky that, with the solution MS is putting into place, there's actually a sizeable amount of hardware out there that can already support it. Then, when dedicated hardware-based decompression comes to the PC, having it on the GPU is a plus because GPUs are the most commonly upgraded PC component... and the API just detects the dedicated decompression block and utilizes it instead of GPU compute.
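A minimal sketch of the kind of fallback chain being described here, with entirely hypothetical capability flags and names; this is not the DirectStorage API, just an illustration of the selection logic:

```python
from enum import Enum, auto

class DecompressPath(Enum):
    DEDICATED_BLOCK = auto()    # fixed-function decompressor on a future GPU
    GPU_COMPUTE = auto()        # compute-shader decompression on today's GPUs
    CPU = auto()                # software fallback

def pick_decompress_path(caps: dict) -> DecompressPath:
    """Pick the fastest decompression path reported by the (hypothetical) driver caps."""
    if caps.get("has_dedicated_decompressor"):
        return DecompressPath.DEDICATED_BLOCK
    if caps.get("supports_compute_decompression"):
        return DecompressPath.GPU_COMPUTE
    return DecompressPath.CPU

# Today's GPUs without a fixed-function block fall back to compute decompression.
print(pick_decompress_path({"supports_compute_decompression": True}))   # DecompressPath.GPU_COMPUTE
```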
 
One is not universally better than the other (except from a cost perspective perhaps).

I think the issue is you're arguing with PlayStation fans (usually younger people) who have to defend their 5700XT class of GPU performance, with weak ray tracing performance and yet-to-be-seen upscaling tech that rivals what's to be found in more expensive platforms. The hard drive isn't going to process and render to make up for a 9 to 10TF GPU, and neither will it process ray tracing or upscale the image.

The SSD is a big step forward since consoles never got one almost 8 years ago, but the RDNA1/2 GPU and 8-core Zen 2 CPU are a much larger gap between these consoles and the 7870-class GPU we had before.

IF the SSD in the PS5 could render and process/compute, then where is it? So far we have nothing, no more than faster loading and streaming. Unless of course the SSD in the PS5 contains some sort of specialized hardware which assists in compute and rendering for the GPU?
 
I think the issue is you're arguing with PlayStation fans (usually younger people) who have to defend their 5700XT class of GPU performance, with weak ray tracing performance and yet-to-be-seen upscaling tech that rivals what's to be found in more expensive platforms. The hard drive isn't going to process and render to make up for a 9 to 10TF GPU, and neither will it process ray tracing or upscale the image.

The SSD is a big step forward since consoles never got one almost 8 years ago, but the RDNA1/2 GPU and 8-core Zen 2 CPU are a much larger gap between these consoles and the 7870-class GPU we had before.

IF the SSD in the PS5 could render and process/compute, then where is it? So far we have nothing, no more than faster loading and streaming. Unless of course the SSD in the PS5 contains some sort of specialized hardware which assists in compute and rendering for the GPU?
VRAM constraints exist, and when hit they will always cause stuttering and lower frame rates.

Like VRAM, an SSD cannot in itself directly affect rendering; however, both are still bottlenecks to rendering. As developers continually push for higher-quality assets and resolutions, your system will become VRAM bound. There is no sensible amount of VRAM a GPU can have that scales the way a just-in-time system backed by a high-speed SSD does. When people talk about graphics, or graphical fidelity, it is more than just the rendering pipeline; people look at the artistry and quality of the assets, the geometric detail, the texture detail, etc. Those are all things made ahead of time and stored on the SSD. Without some method to stream in assets just in time, you will run into a VRAM limitation, and VRAM limitations result in poor performance or lowered resolution to fit within VRAM constraints. This is no different than not having enough compute/bandwidth to support a resolution, only this time we're looking at the footprint of assets.
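A minimal sketch of the just-in-time idea described above: a residency manager that streams assets in on demand and evicts the least recently used ones to stay within a fixed VRAM budget (the names and the synchronous stream_in callback are hypothetical simplifications):

```python
from collections import OrderedDict

class ResidencyManager:
    """Keep only currently needed assets resident, within a fixed VRAM budget."""

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.resident = OrderedDict()              # asset_id -> size_bytes, in LRU order
        self.used = 0

    def request(self, asset_id: str, size_bytes: int, stream_in) -> None:
        if asset_id in self.resident:              # already resident: mark as recently used
            self.resident.move_to_end(asset_id)
            return
        while self.used + size_bytes > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)   # evict least recently used
            self.used -= evicted_size
        stream_in(asset_id)                        # issue the SSD read (async in a real engine)
        self.resident[asset_id] = size_bytes
        self.used += size_bytes
```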

The TL;DR is, quite frankly, that SSD speed is the critical component we needed to move rendering forward. Everything prior to this generation has been based around slower 5400-7200rpm rotational drives as the baseline requirement to play, so every single GPU under the sun with all sorts of VRAM configurations worked.

I think there is a lot of weird posturing happening in this thread, and I'm confused as to where this is going. Cost effectiveness must always be brought into the equation; otherwise, any problem is solvable with infinite resources. Having 64GB of system memory to act as a buffer for just-in-time delivery to GPU memory is not a reasonably cost-effective choice, and it's still not a better solution than having a super-fast NVMe feeding directly to the GPU. It is a solution, but it's not a better one.

The games will come in time. Re-writing entire streaming pipelines and developing assets to support such large pipelines costs both time and money. This wasn't going to happen overnight. Character and environment density will increase with each new wave of games, but you need a new pipeline to support that level of geometry culling as well.

So all of it working together is what is going to get us to next-generation graphics; until then, we're still just looking at old-generation games with some extra stuff slapped on. Give it time, and I think you'll see why all of these features combined are necessary to bring about the next generation of graphics. No sole feature alone could do it. Bottlenecks throughout the entire chain need to be widened to break the next barrier.
 
VRAM constraints exist, and when hit they will always cause stuttering and lower frame rates.

Like VRAM, an SSD cannot in itself directly affect rendering; however, both are still bottlenecks to rendering. As developers continually push for higher-quality assets and resolutions, your system will become VRAM bound. There is no sensible amount of VRAM a GPU can have that scales the way a just-in-time system backed by a high-speed SSD does. When people talk about graphics, or graphical fidelity, it is more than just the rendering pipeline; people look at the artistry and quality of the assets, the geometric detail, the texture detail, etc. Those are all things made ahead of time and stored on the SSD. Without some method to stream in assets just in time, you will run into a VRAM limitation, and VRAM limitations result in poor performance or lowered resolution to fit within VRAM constraints. This is no different than not having enough compute/bandwidth to support a resolution, only this time we're looking at the footprint of assets.

The TL;DR is, quite frankly, that SSD speed is the critical component we needed to move rendering forward. Everything prior to this generation has been based around slower 5400-7200rpm rotational drives as the baseline requirement to play, so every single GPU under the sun with all sorts of VRAM configurations worked.

I think there is a lot of weird posturing happening in this thread, and I'm confused as to where this is going. Cost effectiveness must always be brought into the equation; otherwise, any problem is solvable with infinite resources. Having 64GB of system memory to act as a buffer for just-in-time delivery to GPU memory is not a reasonably cost-effective choice, and it's still not a better solution than having a super-fast NVMe feeding directly to the GPU. It is a solution, but it's not a better one.

The games will come in time. Re-writing entire streaming pipelines and developing assets to support such large pipelines costs both time and money. This wasn't going to happen overnight. Character and environment density will increase with each new wave of games, but you need a new pipeline to support that level of geometry culling as well.

So all of it working together is what is going to get us to next-generation graphics; until then, we're still just looking at old-generation games with some extra stuff slapped on. Give it time, and I think you'll see why all of these features combined are necessary to bring about the next generation of graphics. No sole feature alone could do it.

So in short, GPU compute increases are going to be just as important as IO ones. You need both, along with the other components of a system. Bandwidth is very important, but so is compute capability. In essence, it's the same thing for RDNA2 architectures and modern CPUs: games need to start taking advantage of all that as well and be re-written to make use of it.

A 20TF GPU with more advanced RT, ML etc. would facilitate the NVMe SSD tech even more so. I'm not saying anyone made the wrong choices; it's the best balance they could find, and going with 64GB of RAM was never even an option. It can be argued that the SSDs also somewhat have to assist the 16GB of RAM, which isn't a whole lot to begin with (a rather small increase).
But saying the SSD/IO is providing the largest gap compared to the 2013 consoles, it can be argued that's really not the case (and something DF didn't share either). The RDNA GPU along with the CPU obviously are huge upgrades; besides, it's impossible to compare TF for TF as architectures get more efficient and new features make their debut (like RT, mesh shading/GE, etc.).

Games still have to show it... Yes, we will have to wait and see; so far, nothing, aside from a tech demo (which is looking great, but not up to what some hyped before). Also, UE tech demos never materialise, and yes, UE tech demos before have been running on actual hardware, as playable demos.

Tech demos and games remain two different things.
 
So in short, GPU compute increases are going to be just as important as IO ones. You need both, along with the other components of a system. Bandwidth is very important, but so is compute capability. In essence, it's the same thing for RDNA2 architectures and modern CPUs: games need to start taking advantage of all that as well and be re-written to make use of it.

A 20TF GPU with more advanced RT, ML etc. would facilitate the NVMe SSD tech even more so. I'm not saying anyone made the wrong choices; it's the best balance they could find, and going with 64GB of RAM was never even an option. It can be argued that the SSDs also somewhat have to assist the 16GB of RAM, which isn't a whole lot to begin with (a rather small increase).
But saying the SSD/IO is providing the largest gap compared to the 2013 consoles, it can be argued that's really not the case (and something DF didn't share either). The RDNA GPU along with the CPU obviously are huge upgrades; besides, it's impossible to compare TF for TF as architectures get more efficient and new features make their debut (like RT, mesh shading/GE, etc.).

Games still have to show it... Yes, we will have to wait and see; so far, nothing, aside from a tech demo (which is looking great, but not up to what some hyped before). Also, UE tech demos never materialise, and yes, UE tech demos before have been running on actual hardware, as playable demos.

Tech demos and games remain two different things.

Arguably, the biggest gap from the 2013 consoles is SSD performance. They moved from ~50MB/s spinning platters to 2.4 and 5.5GB/s raw, with compressed values even higher. It's a 50x-100x performance differential. Compute is only ~5x, bandwidth only ~2.5x. CPU maybe 8x max?

The biggest barriers for graphics going into this next generation were the SSD, the CPU, and then the GPU feature set, then compute and bandwidth. Probably in that order.
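As a rough cross-check of those multipliers, using commonly cited public figures for PS4 vs PS5 (~50MB/s vs 5.5GB/s I/O, ~1.84TF vs ~10.3TF compute, 176GB/s vs 448GB/s memory bandwidth; illustrative numbers, not anything new from this thread):

```python
# Generational multipliers from commonly cited PS4 -> PS5 figures.
ps4 = {"io_mb_s": 50, "compute_tf": 1.84, "bandwidth_gb_s": 176}
ps5 = {"io_mb_s": 5500, "compute_tf": 10.3, "bandwidth_gb_s": 448}

for key in ps4:
    print(f"{key}: ~{ps5[key] / ps4[key]:.1f}x")

# io_mb_s: ~110.0x       (even more once compression is counted)
# compute_tf: ~5.6x
# bandwidth_gb_s: ~2.5x
```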

I think more people should spend some more time with these engines before criticizing what can and cannot be done on hardware. I think there is a huge chicken-or-egg situation we are looking at here that very few have addressed with respect to the UE5 demo. The UE5 engine itself runs on PC hardware; typically, when we build levels we run the game in the editor. So it's clear the hardware there to run it was sufficient to run the game in the editor, at least enough for you to do your job. But that is exactly why the claim that only PS5 could do it is poor, and also why the claim that PS5 isn't doing anything special is poor. The editors don't run on PS5 hardware; the engine is responsible for sending the build to the kit to see what it looks like after we're done building. That means these developers can't design something that would be streaming data in and out of memory in the editor the way PS5 is supposed to run. So they'll design something that will run on PC, because that's all their editor can handle. I don't know if that makes sense, but herein lies the crux of the issue. Games are made on PC and then sent to console for testing. So if you design something that runs absolutely horrendously on PC, but knowing it would run on PS5... how do you work with this exactly?

Typically this is why development hardware is usually several magnitudes beyond what's on console, but the hard drive streaming problem is a little awkward, because we don't have the DirectStorage APIs resolved yet and we don't have GPUs with huge amounts of VRAM. So you're left to get the fastest NVMe drive possible and perhaps bulk up with 64GB of RAM, and maybe then your editor will behave the way it would on PS5, but you've got this really complex job now of downgrading the game to fit within < 16GB of total memory.

It won't be easy this generation to hit the limit of the streaming capabilities of these hard drives because the tools haven't been designed to do so. Certainly not within the first 6 months.

I assure you, there is a very specific reason why everything in the UE5 demo, as impressive as it was, used so much instancing and repetition. It is clear there are probably hard limits to what the editor can handle on PC that haven't been talked about either.
 
So what? Do they navigate through the full content within each one of those environments or just a very small part of it? If you have 50% of the game in VRAM then why couldn't you have that small part of each environment in VRAM simultaneously?


If this is true, I think this should help a lot with the portal mechanic. If the game is loading from the SSD on the fly and only keeping the absolute necessary in RAM, then the amount of data it needs to load and render is very small. It'll not load "the other level"; it'll load just what is in front of your character's eyes when it gets there.
Wonder if this is really all true, that the SSD is really being used this way.
 
I assure you, there is a very specific reason why everything in the UE5 demo, as impressive as it was, used so much instancing and repetition. It is clear there are probably hard limits to what the editor can handle on PC that haven't been talked about either.

Well, it IS just a tech demo after all... Budgets and time only allow for so much.
 