Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Why ?

DDR4-3200 32GB ($125) https://www.amazon.com/Crucial-Ballistix-Desktop-Gaming-BL2K16G32C16U4B/dp/B083TSLDF2/ref=sr_1_9?crid=1I2N9U2AR7S0O&dchild=1&keywords=ddr4+3200+32gb&qid=1609880332&sprefix=ddr+4+3200,aps,164&sr=8-9

DDR4-3200 64GB ($200) https://www.amazon.com/TEAMGROUP-3200MHz-PC4-25600-Unbuffered-Computer/dp/B086X24BZY/ref=sr_1_9?crid=RZ7Z81LRP6ZX&dchild=1&keywords=ddr4+3200+64gb&qid=1609880397&sprefix=ddr4+3200+64,aps,156&sr=8-9


Gen 4 NVMe drives, for comparison:

500GB $110 https://www.amazon.com/Corsair-Force-Gen-4-MP600-500GB/dp/B07WS1BRX4/ref=sr_1_3?crid=ZDTRVDXCA0BU&dchild=1&keywords=pci+gen+4+m.2+nvme+ssd&qid=1609880485&sprefix=pci-4+nvme,aps,170&sr=8-3

You can switch from the 500GB to the 2TB on that link and it's $379; the 1TB is out of stock so I can't tell you the price.



I mean, I can add 32GB of RAM versus just 500GB of storage. If I already have a SATA SSD or even a slower NVMe drive, and Unreal will make use of the extra RAM too, wouldn't it be better to go with the extra RAM?
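For a rough sense of the trade-off being asked about, here is the cost per gigabyte of the parts quoted above (a quick sketch using only the snapshot prices from those links, nothing authoritative):

```python
# Rough $/GB for the parts quoted above (snapshot prices, not current ones).
options = {
    "32GB DDR4-3200 kit":  (125, 32),
    "64GB DDR4-3200 kit":  (200, 64),
    "500GB Gen4 NVMe SSD": (110, 500),
    "2TB Gen4 NVMe SSD":   (379, 2000),
}

for name, (price_usd, capacity_gb) in options.items():
    print(f"{name:>20}: ${price_usd / capacity_gb:.2f}/GB")
```

RAM works out roughly 15-20 times more expensive per gigabyte, so the question is really whether a small, very fast pool beats a large, much slower one.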

The 1 TB is around 150 to 180 dollars, depending on the price of NAND or discounts.

The data is streamed based on the current view; that is what virtualized geometry means. Loading data in advance is exactly what you want to avoid: what would you even prefetch into mip 1 or mip 2 of the UE5 geometry if you can't load LOD0 fast enough? It is a shame if the GPU is able to render the mip 0 LOD but the storage can't feed it. Prefetching effectively means loading the full model, because you don't know in advance what will end up on screen. And DirectStorage is there to avoid having to copy the data between RAM and VRAM.

They talked about mip levels of geometry: it means you get fewer polygons if your storage is not fast enough to load the data. In the demo the streaming pool is only 768 MB thanks to the fast storage, but if you start loading full models, VRAM may become a problem.

The level of detail scales with the storage speed.
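A toy sketch of that fallback idea (this is an illustration only, not Epic's streaming code; all names, sizes and the selection rule below are invented):

```python
# Toy sketch of view-driven geometry streaming against a per-frame I/O budget.
# All names, sizes and the fallback rule are illustrative, not UE5 internals.

FRAME_TIME_S = 1 / 30                                     # 30 fps target
SSD_BANDWIDTH_GBPS = 5.5                                  # PS5-class raw throughput
BUDGET_BYTES = SSD_BANDWIDTH_GBPS * 1e9 * FRAME_TIME_S    # loadable per frame

def pick_geometry_mip(visible_clusters, budget=BUDGET_BYTES):
    """Return the finest geometry 'mip' whose total size fits the I/O budget.

    Each cluster lists its size in bytes per mip level, finest (mip 0) first.
    """
    coarsest = max(len(c["mips"]) for c in visible_clusters)
    for mip in range(coarsest):
        total = sum(c["mips"][min(mip, len(c["mips"]) - 1)] for c in visible_clusters)
        if total <= budget:
            return mip, total
    return coarsest - 1, total  # even the coarsest mip blows the budget

# Example: 2000 visible clusters whose mip sizes halve at each level.
clusters = [{"mips": [512_000, 256_000, 128_000, 64_000]} for _ in range(2000)]
mip, total = pick_geometry_mip(clusters)
print(f"budget {BUDGET_BYTES / 1e6:.0f} MB/frame -> mip {mip}, {total / 1e6:.0f} MB loaded")
```

With a slower drive the budget shrinks and the chosen mip gets coarser, which is exactly what "the level of detail scales with the storage speed" means here.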

EDIT:
https://www.amazon.com/Sabrent-Inte...+Nvme+PCIe+4.0+M.2+2280&qid=1609881697&sr=8-1

This one is a 1TB PCIe 4 drive for 169 dollars.
 
The 1 TB is around 150 to 180 dollars, depending on the price of NAND or discounts.

The data is streamed based on the current view; that is what virtualized geometry means. Loading data in advance is exactly what you want to avoid: what would you even prefetch into mip 1 or mip 2 of the UE5 geometry if you can't load LOD0 fast enough? It is a shame if the GPU is able to render the mip 0 LOD but the storage can't feed it. Prefetching effectively means loading the full model, because you don't know in advance what will end up on screen. And DirectStorage is there to avoid having to copy the data between RAM and VRAM.

They talked about mip levels of geometry: it means you get fewer polygons if your storage is not fast enough to load the data. In the demo the streaming pool is only 768 MB thanks to the fast storage, but if you start loading full models, VRAM may become a problem.

The level of detail scales with the storage speed.
How big will the game be?

If you can load 32GB of data into RAM and then stream from your RAM pool plus a slower drive, it should offer a better experience than streaming from a single faster (but still slower than RAM) SSD.
 
How big will the game be?

If you can load 32GB of data into RAM and then stream from your RAM pool plus a slower drive, it should offer a better experience than streaming from a single faster (but still slower than RAM) SSD.

I suppose devs will try to limit game size to 200 GB. This will not be faster: you still need to load the data into RAM from your slow drive, and you need to do all the Nanite virtualisation work in memory. I am not even sure this is possible; the compression used on disk and in RAM is different. Here it really means optimising RAM usage. Better to wait for details before talking about a scenario that may be impossible to reproduce.
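Some rough numbers for that objection, i.e. that the RAM pool still has to be filled from the slower drive first (the drive speeds below are typical ballpark figures assumed for the example, not measurements):

```python
# How long it takes just to fill a 32 GB RAM staging pool from different drives.
# Sequential read speeds below are typical ballpark figures, not benchmarks.
drives_gbps = {
    "7200 rpm HDD":  0.15,
    "SATA SSD":      0.5,
    "Gen3 NVMe SSD": 3.0,
    "Gen4 NVMe SSD": 5.0,
}

POOL_GB = 32   # the RAM pool proposed above

for name, gbps in drives_gbps.items():
    print(f"{name:>13}: {POOL_GB / gbps:6.1f} s to fill {POOL_GB} GB")
```

Steady-state streaming from RAM is great, but every cold start, level transition or fast travel pays the slow-drive cost again.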



Unreal Engine 5 is optimised around SSDs and fast, low-latency storage: virtual geometry, virtual shadow maps, virtual texturing, and keeping in RAM as much as possible only what is visible on screen. I am not sure storage is low-latency enough for this, but it is a much better scenario than before the engine was optimised around SSDs.

That said, game assets won't be the same quality as the demo's; if they were, I suppose the minimum size for games would be at least 5 TB, maybe 10 TB. The funny thing is we have fast storage, but like cartridges it isn't big enough to hold assets at the quality the hardware is able to render.

EDIT: And I suppose a PCIe 4 SSD is not mandatory; something as fast as the Xbox Series SSD is probably enough ;)

And it is cheaper.
 
Any of its loading would be as fast as the hardware allows. If it's loading, then it's loading somewhere between the upper and lower ranges; those values are its normal range (to use medical terminology).

Whether the engine *needs* to load it that quickly would surely be determined by the game and the need to move quickly through detailed environments.

Whether this particular demo maxes out the PS5's IO can be argued until the end of days, but it appears that the engine *allows* for a game to stream at the PS5's transfer rate and that it wouldn't be possible on something with a slower transfer speed (without compromises). Makes logical sense, surely?

Yes, I agree. This is why I don't understand some people's seeming unwillingness to accept that the demo might be possible on a current PC. That's not taking anything away from the PS5. Everyone knows the PS5 still has a much faster IO system than the laptop in question, and even faster in many ways than any PC available today until DirectStorage arrives. If a developer wanted to, they could no doubt make a game/demo that fully utilises that speed; I don't understand the negativity around the suggestion that this demo isn't it.
 
Unreal Engine 5 is optimised around SSDs and fast, low-latency storage: virtual geometry, virtual shadow maps, virtual texturing, and keeping in RAM as much as possible only what is visible on screen. I am not sure storage is low-latency enough for this, but it is a much better scenario than before the engine was optimised around SSDs.

That said, game assets won't be the same quality as the demo's; if they were, I suppose the minimum size for games would be at least 5 TB, maybe 10 TB. The funny thing is we have fast storage, but like cartridges it isn't big enough to hold assets at the quality the hardware is able to render.
There's no streaming in assets from a game that weighs 10 TB -- that would be impossible on a PS5 SSD too. You think the demo was shuffling around ~10 gigs of polygons every frame? That's ridiculous.

This is completely misunderstanding the whole point of Nanite. Nanite will either not work at all, or it will deliver a compressed format that brings all of the detail of your "5TB" meshes into a scalable, deliverable size, and in a way that can be put onto the screen every frame at 30-60 fps by console GPUs and CPUs. (Which, I think, will be the bottleneck -- I'd bet a regular M.2 drive on a PC with a super high-end i9 and 3080 will run UE5 much better than the PS5.)

"Fast" ssds are necessary, but they won't be the main bottleneck in most games and "Fast" doesnt mean ps5 fast.
 
So you're suggesting we ignore parts of the interview because they don't align with your interpretations?
Yes the quotes I selected are "ignoring parts of the interview because they don't align with my interpretations", but the quotes you selected are the "representation of the truth".


I'm not sure why you're trying to dismiss this as a "Chinese guy in some livestream" rather than acknowledging that this was an Epic engineer, with direct access to the demo, in an official Epic China livestream and Q&A.
Yes, he had access to the runtime demo on a Windows PC. Which is why he ran a video at the livestream.
Makes complete sense.



Here is how PCGamer describes the conversation.

"I couldn't get any exact specifications from Epic, but on a conference call earlier this week I asked how an RTX 2070 Super would handle the demo, and Epic Games chief technical officer Kim Libreri said that it should be able to get "pretty good" performance. But aside from a fancy GPU, you'll need some fast storage if you want to see the level of detail shown in the demo video."
"Pretty good performance" and "some fast storage" doesn't tell you it'd run faster or even that the current storage solutions for the PC would be capable.
But it's nice that we're already getting away from the muh laptop runs the demo faster than the consoles nonsense.



One of the funnier things about this particular brand of fanboy nonsense is it could easily have gone the other way -- had Microsoft or Nvidia or somebody partnered for the Nanite reveal demo, they would have talked up different features, and we would have had fanboys banging on about how the (almost surely meshlet-like) geometry streaming approach is "only possible with DX12 Ultimate", until it finally came out that it runs on any hardware you can run compute shaders on.

If you don't know or aren't interested in learning how tech works it's easy to imagine that any feature being marketed is 100% necessary for a given game or demo to run.


You could prevent this immature crap and trolling nonsense by not throwing fanboy accusations every time someone writes something you disagree with.

There are tons of statements from Epic developers saying the new consoles' (plural, so Series X/S included!) I/O subsystems, with dedicated I/O accelerators for lower latency and higher effective throughput, are crucial to making the UE5 tech demo possible.

Yes, I even fucking bolded that sentence in the original post because I just knew someone would try to troll this down to fanboy warring.
 
There's no streaming in assets from a game that weighs 10 TB -- that would be impossible on a PS5 SSD too. You think the demo was shuffling around ~10 gigs of polygons every frame? That's ridiculous.

This is completely misunderstanding the whole point of Nanite. Nanite will either not work at all, or it will deliver a compressed format that brings all of the detail of your "5TB" meshes into a scalable, deliverable size, and in a way that can be put onto the screen every frame at 30-60 fps by console GPUs and CPUs. (Which, I think, will be the bottleneck -- I'd bet a regular M.2 drive on a PC with a super high-end i9 and 3080 will run UE5 much better than the PS5.)

"Fast" ssds are necessary, but they won't be the main bottleneck in most games and "Fast" doesnt mean ps5 fast.

Did you read my post? I'll quote my own post:
That said, game assets won't be the same quality as the demo's; if they were, I suppose the minimum size for games would be at least 5 TB, maybe 10 TB. The funny thing is we have fast storage, but like cartridges it isn't big enough to hold assets at the quality the hardware is able to render.

I said that is if the size of the game were not limited by storage size. If we could have a 100 TB SSD inside a PC or a console, it would not be a problem at all. And a 100 TB SSD doesn't exist for the moment ;-) at least on the consumer side; I have no idea about the professional side, and if one exists I can't imagine the price. They said the size of the demo assets is too big for games but realistic for a TV show, for example. They spoke about reducing the asset quality by a mip level or two. It seems you don't even follow what the devs said about the demo and the technology.

You don't even understand what virtualized geometry is: you need the full asset on the storage. This is not geometry generated at runtime. Nanite is not compatible with tessellation and displacement for the moment.



I am sure of it too; when the demo is released on PC, DirectStorage will probably be available. And I don't think a PS5 SSD was mandatory: an XSX SSD is probably enough, and probably slower is OK too, as this is a very linear demo, not an open world with this asset quality. They said they can run the last part of the demo on a slower SSD with special care about the disk layout (keeping the data sequential on the SSD; it works very well on HDD, but it is not very realistic in an open world, where you would have to duplicate huge amounts of data on a tiny SSD). We are back to square one: it is better to have a faster SSD, like the XSX or PS5, or a PC once DirectStorage is available. All this care about data layout was one of the reasons Mark Cerny said it was important to have a fast SSD.

EDIT:
Game size is so problematic that they talked about the possibility of streaming assets onto the SSD, level by level, from the cloud. If they reach the targeted 60 fps on next-generation consoles for the demo without compromising quality*, it is sad, because we will never see this inside a game: SSDs are too small, not because the consoles aren't powerful enough to render assets of this quality.

*The demo can probably run at 60 fps with a lighting downgrade: Nanite is 60 fps ready but Lumen is not, and the demo was made 18 months before the planned release. For example, Nanite's performance was better than in the slide: it takes less than 4.5 ms to run on PS5, according to Brian Karis, and multiple Epic engineers said this is very early technology and they can optimise it to keep the same quality but nearly double the framerate.


EDIT: On the Quixel Megascans site there are some of the rock assets used in the demo; the size is between 200 MB and 300 MB for each asset.
 
it would suggest that there aren't any points in the demo that would require even burst speeds as high as what the PS5 is capable of.

Not many have 7 GB/s NVMe drives on PC yet, I think, but adoption might pick up faster than one would think once prices adjust a bit.

Anyway, by the time actual UE5 games see the light of day, PCs will have NVMe (with DirectStorage) solutions an order of magnitude faster than the consoles.
 
Out of interest, does anyone know how Unreal licensing works with something like Game Pass? As far as I know, Unreal licensing is on a revenue basis, which would be hard to determine for a subscription service.

It can't be the case that Microsoft gets to use Unreal for 'free' if everything's just on Game Pass, or is there a clause that I haven't seen?
 
Not many have 7 GB/s NVMe drives on PC yet, I think, but adoption might pick up faster than one would think once prices adjust a bit.

Anyway, by the time actual UE5 games see the light of day, PCs will have NVMe (with DirectStorage) solutions an order of magnitude faster than the consoles.
You expect PC SSDs to offer 55-90 GB/s when UE5 games arrive?
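For context, that range is just "an order of magnitude" applied to the PS5's quoted figures:

```python
# Where "55-90 GB/s" comes from: ten times the PS5's quoted SSD throughput.
ps5_raw_gbps        = 5.5   # quoted raw throughput
ps5_compressed_gbps = 9.0   # quoted typical throughput with compression

print(f"10x raw:        {ps5_raw_gbps * 10:.0f} GB/s")         # 55 GB/s
print(f"10x compressed: {ps5_compressed_gbps * 10:.0f} GB/s")  # 90 GB/s
```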
 
How big will the game be?

If you can load 32GB of data into RAM and then stream from your RAM pool plus a slower drive, it should offer a better experience than streaming from a single faster (but still slower than RAM) SSD.

Agreed. VRAM is backed by system RAM on a PC, and a mid-range to high-end PC can have anywhere from 24GB to 56GB of RAM. A big chunk will be the slow system variety, but that slow RAM is still many times faster than the consoles' SSDs.

That slow SSD on the PC is going to see less traffic over the course of multiple frames being rendered, so SSD bandwidth becomes less important.
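A minimal sketch of that claim; the demand rate and the hit rate below are assumptions invented for the example:

```python
# Why a large RAM cache cuts SSD traffic: only cache misses hit the drive.
# The demand figure and hit rate below are invented for illustration.
asset_demand_gb_per_s = 5.5    # assume the renderer wants PS5-like streaming rates
ram_cache_hit_rate    = 0.95   # assume 95% of requests are already resident in RAM

ssd_gb_per_s = asset_demand_gb_per_s * (1 - ram_cache_hit_rate)
print(f"SSD only needs to supply ~{ssd_gb_per_s * 1000:.0f} MB/s")   # ~275 MB/s
```

Even a SATA SSD covers that miss traffic; the open question is how often the camera moves somewhere the cache hasn't seen yet.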
 
You don't even understand what virtualized geometry is: you need the full asset on the storage. This is not geometry generated at runtime. Nanite is not compatible with tessellation and displacement for the moment.

Yes, but the trick here is what counts as "the whole asset" -- something like geometry textures has very different size requirements than a .obj file. (We know for a fact Nanite isn't geometry textures, their lead dev said as much, but I'd bet it's something along those lines: a compressed representation with different rendering properties than a raw mesh.) I'd speculate there's an absolutely 0% chance that Nanite is scanning through entire raw Megascans on disk and then doing whatever magic to get the parts it needs -- I believe Nanite is going to be as much about the format of the meshes as the tech to render them. Even more traditional pipelines are getting into more heavily pre-processing meshes (such as DX12U's meshlet approach); I think it's a safe bet that Nanite meshes are going to be pretty different from regular geometry files.

I am sure of it too; when the demo is released on PC, DirectStorage will probably be available.

Yes - my bet here is: it will run fine without DirectStorage (on suitably fast CPUs and GPUs). We'll be able to find out, since this demo or technically similar demos will run on PCs that are incompatible with DirectStorage.

EDIT:
*The demo can probably run at 60 fps with a lighting downgrade: Nanite is 60 fps ready but Lumen is not, and the demo was made 18 months before the planned release. For example, Nanite's performance was better than in the slide: it takes less than 4.5 ms to run on PS5, according to Brian Karis, and multiple Epic engineers said this is very early technology and they can optimise it to keep the same quality but nearly double the framerate.
See, this is a super important data point. I'm speculating, but IMO the 4.5 ms a frame is another data point that the SSD is nowhere near the bottleneck here -- there's no way you could stream in all that raw geo every frame in 4.5 ms on a 5.5 GB/s drive. Something else is going on! Most likely something CPU and GPU intensive (decompressing and reconstructing some kind of data structure friendly to this sort of rendering).
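The arithmetic behind that point: even at the PS5's full raw rate, a single frame can only pull tens to a couple of hundred megabytes off the drive, nowhere near the working set Nanite renders:

```python
# Upper bound on data pulled off the SSD per frame at full raw throughput.
SSD_GBPS = 5.5                        # PS5 raw SSD throughput
for frame_ms in (4.5, 16.7, 33.3):    # Nanite's reported GPU cost, 60 fps, 30 fps
    max_mb = SSD_GBPS * 1e9 * (frame_ms / 1e3) / 1e6
    print(f"{frame_ms:5.1f} ms -> at most ~{max_mb:.0f} MB from the SSD")
```

So most of each frame's geometry has to already be resident and in a render-friendly format; the SSD tops up the cache rather than feeding the rasteriser directly.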

EDIT: On the Quixel Megascans site there are some of the rock assets used in the demo; the size is between 200 MB and 300 MB for each asset.

As above in this post, I agree the source assets (what goes into the builds, not what goes onto the disk) are big.

Not many have 7 GB/s NVMe drives on PC yet, I think, but adoption might pick up faster than one would think once prices adjust a bit.

Anyway, by the time actual UE5 games see the light of day, PCs will have NVMe (with DirectStorage) solutions an order of magnitude faster than the consoles.

I don't think this is true -- UE5 (or some preview builds, at least) is coming out this year, right? As soon as it's out, some indie dev will slap every Quixel model into a map and make a build.
 
Agreed. VRAM is backed by system RAM on a PC, and a mid-range to high-end PC can have anywhere from 24GB to 56GB of RAM. A big chunk will be the slow system variety, but that slow RAM is still many times faster than the consoles' SSDs.

That slow SSD on the PC is going to see less traffic over the course of multiple frames being rendered, so SSD bandwidth becomes less important.

Yeah, that's my point. I can get 64GB of RAM for my PC for the price of a 1TB NVMe SSD. Yes, I can't fit a full game in 64GB, and I'm sure after Windows has its way I might be limited to 48 or 56GB. But that is still 48GB of RAM offering huge amounts of bandwidth, along with the 8-32GB of RAM on the graphics card.

Direct I/O will not only improve bandwidth through storage but also through RAM.

Don't forget that next year we should see AM5 with DDR5, and I am sure Intel will be right on that train too.

DDR5 will raise the maximum UDIMM size from 32GB to 128GB. So we could be looking at the average gaming PC actually having upwards of 128GB of RAM.

If, as @chris1515 says, games will cap out at 200GB, then you could literally fit half the game in RAM. So how would an 8 GB/s SSD be faster?
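Putting hypothetical numbers on that (the OS/engine reservation is a guess):

```python
# If games cap out around 200 GB and a DDR5-era PC carries 128 GB of RAM:
GAME_SIZE_GB = 200
TOTAL_RAM_GB = 128
RESERVED_GB  = 24    # hypothetical allowance for the OS, engine and other processes

cacheable_gb = TOTAL_RAM_GB - RESERVED_GB
print(f"~{cacheable_gb} GB cacheable in RAM = "
      f"{cacheable_gb / GAME_SIZE_GB:.0%} of a {GAME_SIZE_GB} GB install")   # ~52%
```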

And that's up to 128GB per DIMM. So if you have a board with 4 DIMM slots, like today, you could do 512GB of RAM.

Now of course that will be crazy expensive. But still there are gamers willing to spend the money.
 
Everyone was being quite grown up and trying to determine how the engine functioned.

Do you have a screenshot of your system specs? It might actually be fun to see how it compares over the course of the generation, but please if you're going to do it be honest and upfront about your PC's specs. Hence why a screenshot would be a good start. :)

If you distrust me already, no amount of screenshots will alleviate that /shrugs

But I will play for "fun" (at work atm though)

ASUS Prime X299 Edition 30
Intel Core i9-10900X (X not K version due to PCIe lanes)
64 GB Corsair Vengeance
NVIDIA RTX 3090
Corsair MP600 NVMe - 2TB
6 x 2 TB HPE Enterprise SSD's (read intensive)

Due to be updated in ~2 years....GPU will be upgraded next generation.

I will be more interested in the performance jump to my next PC, than how a console demo will run TBH /shrugs
 
This isn't relevant because as you know, unless the tech is in every console, developers can't target it as a baseline, let alone major cross platform middleware developers.
That's a weird thing to say, because a fast SSD is not a requirement for using Unreal Engine 5. UE5 is not an engine only for fast-flash devices.

I don't understand why you are so unbelievably focussed on the SSD, which has long been an option for PCs, where RAID SSDs should theoretically eliminate loading times but don't, because the fundamental I/O framework is the choke point. This is what Direct Storage is aimed at improving.

And by far the biggest change to the I/O systems in the new consoles is the move from a spinning disk to an NVMe SSD. Yes, there are other elements (many of which, like hardware decompressors, were already in the last-gen consoles), but for the most part these are about removing the associated CPU overhead of the very fast data transfers enabled by the SSDs.

The underlying I/O as administered by the OS in PCs and consoles is already light on CPU usage. The CPU hit comes from having game data stored in .PAK (zip) files and having to load them into memory for the CPU to pick apart the data, decompress anything destined for the CPU into main memory, and sometimes decompress and send data for the GPU over the PCIe bus on PC. That's your CPU usage, which Microsoft cannot fix with Direct Storage because this is just how games are written.

And at no point has anyone from EPIC ever stated that the UE5 demo maxes out the PS5's or even XSX's sustained data transfer speed. In fact, the Epic China engineer reportedly said the exact opposite.
I've not claimed this. :???:

Sure, reducing IO CPU overhead is a great thing. But no-one's claimed it's a requirement for the UE5 demo provided you have a good CPU.
I've not claimed this either. You seem to be posting on the basis that the improvements are about reducing the CPU overhead of I/O, whereas this isn't the case. CPU-driven I/O has not been a thing for literally generations. Some improvements will come from re-arranging data so that it does not need to be picked apart by the CPU first (i.e. file check-in), but that's an option now.

Microsoft's Direct Storage API can't change this, games are written how they are written.

How could Tim Sweeney say the demo can't run on PC when directly asked?

This was your assertion, not me:

"Conversely why has Tim Sweeney consistently avoided saying that the demo won't run on PC, even when asked directly?

You're literally arguing with yourself here :mrgreen: and at this point I really am out, because it feels like you are just arguing with people for the sake of arguing and you're not interested in learning or discussing anything.
 
If you distrust me already, no amount of screenshots will alleviate that /shrugs

But I will play for "fun" (at work atm though)

ASUS Prime X299 Edition 30 £570
Intel Core i9-10900X (X not K version due to PCIe lanes) £560
64 GB Corsair Vengeance £300
NVIDIA RTX 3090 £1400
Corsair MP600 NVMe - 2TB £340
6 x 2 TB HPE Enterprise SSD's (read intensive) ~£500

~£3670 total

Due to be updated in ~2 years....GPU will be upgraded next generation.

I will be more interested in the performance jump to my next PC, than how a console demo will run TBH /shrugs

Okay, so you've got an entirely new PC: all new components, nothing upgraded, and all appears to be top of the range. You're already talking about your next PC as if you're going to replace every single component again.

It seems improbable, but let's play ball.

Are you going to be able to run benchmarks on that system throughout the course of the generation?

Edit: I priced your machine. Are you sure you haven't just created a fantasy machine using your favourite website? I can't see many people being stupid enough to buy an entirely new machine and disregard their entire pre-existing PC. A screenshot would definitely be preferable.

And anyway, if you really do have it - it would definitely still be fun to see how a machine costing nearly £4000 compares over the course of a generation. Especially since it almost exactly matches to 10x the price of the PS5/XSX.
 
Yes, but the trick here is what counts as "the whole asset" -- something like geometry textures has very different size requirements than a .obj file. (We know for a fact Nanite isn't geometry textures, their lead dev said as much, but I'd bet it's something along those lines: a compressed representation with different rendering properties than a raw mesh.) I'd speculate there's an absolutely 0% chance that Nanite is scanning through entire raw Megascans on disk and then doing whatever magic to get the parts it needs -- I believe Nanite is going to be as much about the format of the meshes as the tech to render them. Even more traditional pipelines are getting into more heavily pre-processing meshes (such as DX12U's meshlet approach); I think it's a safe bet that Nanite meshes are going to be pretty different from regular geometry files.



Yes - my bet here is: it will run fine without DirectStorage (on suitably fast CPUs and GPUs). We'll be able to find out, since this demo or technically similar demos will run on PCs that are incompatible with DirectStorage.


See, this is a super important data point. I'm speculating, but IMO the 4.5 ms a frame is another data point that the SSD is nowhere near the bottleneck here -- there's no way you could stream in all that raw geo every frame in 4.5 ms on a 5.5 GB/s drive. Something else is going on! Most likely something CPU and GPU intensive (decompressing and reconstructing some kind of data structure friendly to this sort of rendering).



As above in this post, I agree the source assets (what goes into the builds, not what goes onto the disk) are big.



I don't think this is true -- UE5 (or some preview builds, at least) is coming out this year, right? As soon as it's out, some indie dev will slap every Quixel model into a map and make a build.

Again, the Epic devs themselves said this is too big for videogames. I think they know better than you whether it is viable for videogames or not. The solution is to stream some of the game data from the cloud if you want to keep the same quality, or to reduce the asset quality. They created this demo; reducing the geometry detail is needed in a shipped videogame, because compression is not magic. The format of the data is different from the raw Megascans asset and probably compresses better, but this does not change the fact that this asset quality is too big for videogames.

Brian Karis is the person behind the Nanite idea.

There are two compression formats: the one on disk, which is decompressed by the I/O mechanism with the hardware decompressor on console, and one in RAM at runtime, but the processing is not free even apart from the decompression. For the Nanite process there are two rasterization paths: a compute-shader software rasterizer for pixel-sized triangles, and for triangles bigger than a pixel a primitive/mesh shader path using the hardware rasterizer; this was stated by Brian Karis. Decompressing from disk and in RAM and rasterizing the geometry took 4.5 ms on PS5 when they made the slide; Brian Karis said it is less now, as that was before some optimizations.
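As a purely schematic illustration of that two-path split (the threshold, names and data layout below are invented for the example, not how Nanite actually classifies its work):

```python
# Schematic version of the split described above: a compute-shader (software)
# rasteriser for pixel-sized triangles, the hardware rasteriser for larger ones.
# The threshold, data layout and names are invented for illustration.

PIXEL_AREA_THRESHOLD = 1.0   # projected triangle area, in pixels

def split_rasteriser_work(clusters):
    software, hardware = [], []
    for cluster in clusters:
        if cluster["avg_tri_area_px"] <= PIXEL_AREA_THRESHOLD:
            software.append(cluster)    # tiny triangles: software rasteriser
        else:
            hardware.append(cluster)    # bigger triangles: hardware rasteriser
    return software, hardware

clusters = [
    {"id": 0, "avg_tri_area_px": 0.6},   # dense scanned rock, sub-pixel triangles
    {"id": 1, "avg_tri_area_px": 0.8},
    {"id": 2, "avg_tri_area_px": 14.0},  # nearby flat wall, large triangles
]
sw, hw = split_rasteriser_work(clusters)
print(f"{len(sw)} clusters -> software rasteriser, {len(hw)} -> hardware rasteriser")
```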


The limitations are storage speed (asset quality) and hardware power (resolution, or reduced geometric detail). Nanite tries to display one polygon per pixel; the resolution can drop if you have storage fast enough to stream the best assets but hardware not powerful enough to render them at 4K, like the PS5 (1404p on average in the demo) or XSX. Another solution, if the hardware is not fast enough, is to reduce the geometric detail, and that also works if the storage is too slow to stream the data into RAM.
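Since the target is roughly one triangle per pixel, the visible-triangle budget follows directly from resolution; the sketch below is just that rule of thumb applied to common resolutions (the 1404p width is plain 16:9 arithmetic, nothing engine-specific):

```python
# On-screen triangle budget at roughly one triangle per pixel.
resolutions = {
    "1080p": (1920, 1080),
    "1404p": (2496, 1404),   # approximate average resolution quoted for the demo
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: ~{w * h / 1e6:.1f} million triangles visible")
```

Which is why dropping resolution or coarsening the geometry are the two levers: either one lets a slower GPU or slower storage keep up.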

Brian Karis says this is not viable for videogames, and that the problem is storage size.

It is funny to see people speculating as if they know better than the guy who has worked on it for years and has had this idea for more than a decade. We can add virtual shadow mapping, but that counts as part of virtual texturing, I suppose. Textures are 8K in the demo and shadow maps are 16K. One million triangles is equivalent in size to a 4K texture, they said in the Unreal Engine fest video.


Tim Sweeney and other Epic devs have hinted the demo will be made available; everyone will be able to test it, at least on PC, to verify the SSD speed needed, for example, or how it scales. We will be able to see the size it takes on storage.

https://www.artstation.com/artwork/bK5Vvv

Some images of the untextured ZBrush model are here:
https://www.artstation.com/artwork/kDl2aA

https://www.artstation.com/artwork/nYEDvE

EDIT: Added a Twitter thread from the Nanite creator.
 
That's a weird thing to say, because a fast SSD is not a requirement for using Unreal Engine 5. UE5 is not an engine only for fast-flash devices.

But Nanite and Lumen are:


"And with features for scaling the content down to run on current generation platforms using traditional rendering and lighting techniques"


We know UE5 scales across basically everything, but the Edge article makes clear (as per the quotes I've already highlighted in my post above to Tottentranz) that it was the move from spinning HDDs to flash-based SSDs that enabled the next-gen tech of Nanite and Lumen. Hence why they were not possible without the SSDs in the new consoles, as Tim Sweeney has literally already stated (again, see above post to Tottentranz).

I don't understand why you are so unbelievably focussed on the SSD, which has long been an option for PCs, where RAID SSDs should theoretically eliminate loading times but don't, because the fundamental I/O framework is the choke point. This is what Direct Storage is aimed at improving.

Because with or without Direct Storage, SSDs still vastly increase available IO bandwidth (especially if game data is organised to take advantage of them), and more importantly for Nanite/Lumen, they reduce seek times, and thus the latency of data requests, by orders of magnitude. Direct Storage makes the whole process more efficient and less of an overhead on the CPU, but you're acting as if there's literally no advantage to SSDs over HDDs without something like DirectStorage and a hardware decompression unit, which is blatantly false.

The underlying I/O as administered by the OS in PCs and consoles is already light on CPU usage. The CPU hit comes from having game data stored in .PAK (zip) files and having to load them into memory for the CPU to pick apart the data, decompress anything destined for the CPU into main memory, and sometimes decompress and send data for the GPU over the PCIe bus on PC. That's your CPU usage, which Microsoft cannot fix with Direct Storage because this is just how games are written.

This is wrong. The CPU I/O overhead, even outside of the decompression requirements, is still relatively high:

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

"The final component in the triumvirate is an extension to DirectX - DirectStorage - a necessary upgrade bearing in mind that existing file I/O protocols are knocking on for 30 years old, and in their current form would require two Zen CPU cores simply to cover the overhead, which DirectStorage reduces to just one tenth of single core."

As Microsoft have made clear to Digital Foundry, there is scope to reduce that overhead by more than an order of magnitude via Direct Storage. However I do agree with you that the overhead associated with decompression is even higher.
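To put those quoted figures in perspective, here is a back-of-envelope scaling exercise; it assumes the Digital Foundry numbers refer to the Series X's 2.4 GB/s raw rate and that the overhead grows roughly linearly with throughput, both of which are my assumptions rather than anything Microsoft has stated:

```python
# CPU cost of I/O overhead, using the Digital Foundry figures quoted above:
# ~2 Zen cores with legacy file I/O, ~1/10 of a core with DirectStorage.
# Assumptions: those figures refer to the Series X's 2.4 GB/s raw rate, and the
# overhead scales roughly linearly with throughput. Neither is a measured fact.
REFERENCE_GBPS = 2.4
LEGACY_CORES = 2.0
DIRECTSTORAGE_CORES = 0.1

for gbps in (2.4, 5.5, 7.0):
    scale = gbps / REFERENCE_GBPS
    print(f"{gbps} GB/s: legacy ~{LEGACY_CORES * scale:.1f} cores, "
          f"DirectStorage ~{DIRECTSTORAGE_CORES * scale:.2f} cores")
```

Which lines up with the later point in this post: without DirectStorage you simply need a beefier CPU as transfer rates climb, not that fast streaming is impossible.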

DirectStorage may well deal with both, though. RTX-IO certainly deals with both in combination with DirectStorage, but there is some suggestion that DirectStorage itself may be the framework for GPU-based decompression, with RTX-IO simply being the Nvidia branding of it. We've yet to find out for certain.

https://www.anandtech.com/show/1620...-starts-at-the-highend-coming-november-18th/2

"Finally, AMD today is also confirming that they will offer support for Microsoft's DirectStorage API. Derived from tech going into the next-gen consoles, DirectStorage will allow game assets to be streamed directly from storage to GPUs, with the GPUs decompressing assets on their own."

My key point here, though, is that these are CPU overheads which are certainly great to reduce, but that doesn't mean games can't still take advantage of SSDs without those reductions. It simply means you need a proportionately more powerful CPU to cope with the overhead as data transfer rates ramp up. The statement from the Epic engineer is exactly in line with that: he states data transfer rates in the UE5 demo aren't that high (by SSD standards), but you nevertheless need a powerful CPU. Presumably DirectStorage would mean you need a less powerful CPU to achieve the same thing. That in no way means it can't be achieved today, though.

I've not claimed this. :???:

Okay, so just to be clear: are you or are you not arguing that the UE5 demo "Lumen in the Land of Nanite" could be run on today's PCs without Direct Storage?

I've not claimed this either. You seem to be posting on the basis that the improvements are about reducing the CPU overhead of I/O, whereas this isn't the case. CPU-driven I/O has not been a thing for literally generations. Some improvements will come from re-arranging data so that it does not need to be picked apart by the CPU first (i.e. file check-in), but that's an option now.

Microsoft's Direct Storage API can't change this, games are written how they are written.

See my previous response on this. In any case, it appears you are trying to assert that the UE5 demo is not possible on today's PCs because, despite them having SSDs, they don't have the rest of the console IO stack to support them. If that's not your argument, then I'd suggest we're not in major disagreement and we can leave it there.

This was your assertion, not me:

You're literally arguing with yourself here :mrgreen: and at this point I really am out, because it feels like you are just arguing with people for the sake of arguing and you're not interested in learning or discussing anything.

I think you misunderstood my post. I wasn't making that argument; I was repeating your argument for the benefit of the reader, as the thread of the conversation was lost when you quoted me. You asked (paraphrasing) "How could Tim Sweeney say the demo can't run on PC (even if directly asked)?", suggesting that's a question he couldn't answer because of the variation in PC hardware configurations. So I answered that if it were impossible to run the demo on any PC, because it requires an IO capability beyond current PCs, it should be straightforward for him to say so. The point being, he hasn't said anything of the sort.
 
Not many have 7 GB/s NVMe drives on PC yet, I think, but adoption might pick up faster than one would think once prices adjust a bit.

Anyway, by the time actual UE5 games see the light of day, PCs will have NVMe (with DirectStorage) solutions an order of magnitude faster than the consoles.
On PCs that 0.1% of users have.
 