Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Yes, the quotes I selected are "ignoring parts of the interview because they don't align with my interpretations", but the quotes you selected are the "representation of the truth".

The quotes I've highlighted are in addition to those you highlighted, in order to add context. I'm not ignoring any part of the article; I simply prefer not to cherry-pick parts of it out of context so as to reinforce my own interpretation. If you feel the quotes I've picked are missing additional context from the article then do feel free to point that out.

Yes, he had access to the runtime demo on a Windows PC. Which is why he ran a video at the livestream.
Makes complete sense.

Is this really the best argument you have for claiming this individual isn't an Epic employee and/or is simply lying? Here's the video itself; my Chinese isn't great, but even I can tell this is legitimate and they're clearly talking about the demo. You can even check the timestamps at 53:00 and 2:07:00 to see clearly that they are talking about specs and streaming speed. The translations for those points are also in the comments below the video. It's probably time you dropped this now.


"Pretty good performance" and "some fast storage" doesn't tell you it'd run faster or even that the current storage solutions for the PC would be capable.

Why would he state you need "some fast storage" to get "pretty good performance" at the same detail levels as the PS5 if it is in fact impossible to run the game on PC at the same detail levels as the PS5 because said storage solution doesn't exist? That would be a pretty strange statement to make, would it not? Wouldn't he simply have said "you can't, because PCs lack the storage solutions at the moment"? I'd say you're really stretching on this point alone, but in combination with the video above I'm afraid you're straying into complete denial territory.

But it's nice that we're already getting away from the "muh laptop runs the demo faster than the consoles" nonsense.

This is not something I've claimed. The fact is that we don't know how they compare in performance, because one is locked and the other is likely unlocked, but also "not cooked". We can at this stage be quite certain that the 2080M-powered laptop in question can run the demo that was shown on the PS5 at around 40fps.

There are tons of statements from Epic developers saying the new consoles' (plural, so Series X/S included!) I/O subsystems with dedicated I/O accelerators for lower latency and higher effective throughput are crucial to making the UE5 tech demo possible.

Yes, I even fucking bolded that sentence in the original post because I just knew someone would try to troll this down to fanboy warring.

I know this wasn't directed at me, but it's an interesting point so I'd like to pick up on it. It seems here that you are admitting that the UE5 demo would be perfectly possible on the XSX, which I agree is likely. However, at the same time you are arguing that it's entirely impossible on a modern PC. I'd really like to understand how you're arriving at these seemingly contradictory conclusions. The XSX I/O system is capable of 4.8GB/s with decompression. There are already drives available today which exceed this performance without decompression. We also know, thanks to the DF article, that Microsoft claims the I/O system inclusive of decompression would saturate the equivalent of 5 Zen 2 cores (and without decompression, 2 Zen 2 cores). There are obviously CPUs available today which are both faster and have many more cores than this.

So how can you conclude that the demo would be totally impossible on a modern PC but entirely possible on the XSX?
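As a rough back-of-envelope sketch of that argument, the figures quoted in this thread can be plugged in directly. The hypothetical PC drive speed and core count below are my own assumptions for illustration, not anything Epic or Microsoft have stated:

```cpp
// Back-of-envelope sketch using only figures quoted in this thread (Series X: 2.4 GB/s raw,
// 4.8 GB/s with decompression, costing roughly 2 and 5 Zen 2 core-equivalents respectively
// per the DF claim). The hypothetical PC drive and core count are assumptions.
#include <cstdio>

int main() {
    // Figures quoted in the thread (Series X I/O system)
    const double xsx_raw_gbps   = 2.4; // GB/s before decompression
    const double xsx_cores_raw  = 2.0; // Zen 2 core-equivalents, I/O only
    const double xsx_cores_full = 5.0; // Zen 2 core-equivalents, I/O + decompression

    // Hypothetical PC (assumed): PCIe 4.0 NVMe drive, 8-core Zen 2 class CPU
    const double pc_drive_raw_gbps = 7.0;
    const double pc_cpu_cores      = 8.0;

    std::printf("Raw bandwidth needed: %.1f GB/s, hypothetical PC drive: %.1f GB/s -> %s\n",
                xsx_raw_gbps, pc_drive_raw_gbps,
                pc_drive_raw_gbps >= xsx_raw_gbps ? "enough" : "not enough");
    std::printf("Doing the same work in software: ~%.0f core-equivalents for I/O alone, "
                "~%.0f including decompression (%.0f%% of the assumed 8-core CPU)\n",
                xsx_cores_raw, xsx_cores_full, xsx_cores_full / pc_cpu_cores * 100.0);
    return 0;
}
```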
 
of course :LOL:

At best, a standard SSD ten times faster than the one in the PS5 will arrive in 2029-2031 with PCIe 7 (64 GB/s). The PCIe 5 (16 GB/s in theory) and PCIe 6 (32 GB/s in theory) specifications are released, and they aren't fast enough: 2022 for PCIe 5 and 2025/2026 for PCIe 6.
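A quick sanity check of that PCIe arithmetic, assuming consumer NVMe drives keep using 4 lanes and taking the PS5's 5.5 GB/s raw rate as the baseline (both assumptions mine, not from the posts above):

```cpp
// The per-generation x4 figures below match the ones quoted above
// (PCIe 5 ~16 GB/s, PCIe 6 ~32 GB/s, PCIe 7 ~64 GB/s).
#include <cstdio>

int main() {
    const double ps5_raw_gbps = 5.5;
    const double target_gbps  = 10.0 * ps5_raw_gbps;   // "ten times faster" -> 55 GB/s

    struct Gen { int version; double x4_gbps; };
    const Gen gens[] = { {4, 8.0}, {5, 16.0}, {6, 32.0}, {7, 64.0} };

    std::printf("Target: %.0f GB/s (10x the PS5's raw %.1f GB/s)\n", target_gbps, ps5_raw_gbps);
    for (const Gen& g : gens) {
        std::printf("PCIe %d x4: %4.0f GB/s -> %s\n", g.version, g.x4_gbps,
                    g.x4_gbps >= target_gbps ? "enough" : "not enough");
    }
    return 0;
}
```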

I know this wasn't directed at me, but it's an interesting point so I'd like to pick up on it. It seems here that you are admitting that the UE5 demo would be perfectly possible on the XSX, which I agree is likely. However, at the same time you are arguing that it's entirely impossible on a modern PC. I'd really like to understand how you're arriving at these seemingly contradictory conclusions. The XSX I/O system is capable of 4.8GB/s with decompression. There are already drives available today which exceed this performance without decompression. We also know, thanks to the DF article, that Microsoft claims the I/O system inclusive of decompression would saturate the equivalent of 5 Zen 2 cores (and without decompression, 2 Zen 2 cores). There are obviously CPUs available today which are both faster and have many more cores than this.

So how can you conclude that the demo would be totally impossible on a modern PC but entirely possible on the XSX?

There is a bit more to DirectStorage than decompression and direct access to VRAM.

https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/

Game workloads have also evolved. Modern games load in much more data than older ones and are smarter about how they load this data. These data loading optimizations are necessary for this larger amount of data to fit into shared memory/GPU accessible memory. Instead of loading large chunks at a time with very few IO requests, games now break assets like textures down into smaller pieces, only loading in the pieces that are needed for the current scene being rendered. This approach is much more memory efficient and can deliver better looking scenes, though it does generate many more IO requests.

Unfortunately, current storage APIs were not optimized for this high number of IO requests, preventing them from scaling up to these higher NVMe bandwidths creating bottlenecks that limit what games can do. Even with super-fast PC hardware and an NVMe drive, games using the existing APIs will be unable to fully saturate the IO pipeline leaving precious bandwidth on the table.

That’s where DirectStorage for PC comes in. This API is the response to an evolving storage and IO landscape in PC gaming. DirectStorage will be supported on certain systems with NVMe drives and work to bring your gaming experience to the next level. If your system doesn’t support DirectStorage, don’t fret; games will continue to work just as well as they always have.

In either case, previous gen games had an asset streaming budget on the order of 50MB/s which even at smaller 64k block sizes (ie. one texture tile) amounts to only hundreds of IO requests per second. With multi-gigabyte a second capable NVMe drives, to take advantage of the full bandwidth, this quickly explodes to tens of thousands of IO requests a second. Taking the Series X’s 2.4GB/s capable drive and the same 64k block sizes as an example, that amounts to >35,000 IO requests per second to saturate it.

Existing APIs require the application to manage and handle each of these requests one at a time first by submitting the request, waiting for it to complete, and then handling its completion. The overhead of each request is not very large and wasn’t a choke point for older games running on slower hard drives, but multiplied tens of thousands of times per second, IO overhead can quickly become too expensive preventing games from being able to take advantage of the increased NVMe drive bandwidths.

NVMe devices are not only extremely high bandwidth SSD based devices, but they also have hardware data access pipes called NVMe queues which are particularly suited to gaming workloads. To get data off the drive, an OS submits a request to the drive and data is delivered to the app via these queues. An NVMe device can have multiple queues and each queue can contain many requests at a time. This is a perfect match to the parallel and batched nature of modern gaming workloads. The DirectStorage programming model essentially gives developers direct control over that highly optimized hardware.

In addition, existing storage APIs also incur a lot of ‘extra steps’ between an application making an IO request and the request being fulfilled by the storage device, resulting in unnecessary request overhead. These extra steps can be things like data transformations needed during certain parts of normal IO operation. However, these steps aren’t required for every IO request on every NVMe drive on every gaming machine. With a supported NVMe drive and properly configured gaming machine, DirectStorage will be able to detect up front that these extra steps are not required and skip all the necessary checks/operations making every IO request cheaper to fulfill.

For these reasons, NVMe is the storage technology of choice for DirectStorage and high-performance next generation gaming IO

This is the problem, aside from decompression and direct access between the SSD and the VRAM.

It does this in several ways: by reducing per-request NVMe overhead, enabling batched many-at-a-time parallel IO requests which can be efficiently fed to the GPU, and giving games finer grain control over when they get notified of IO request completion instead of having to react to every tiny IO completion.

In this way, developers are given an extremely efficient way to submit/handle many orders of magnitude more IO requests than ever before ultimately minimizing the time you wait to get in game, and bringing you larger, more detailed virtual worlds that load in as fast as your game character can move through it.

And part of the solution.
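To make the ">35,000 IO requests per second" figure from the quoted blog post concrete, here is the same arithmetic spelled out. Only the Series X number comes from the blog post; the other drives are my own illustrative additions:

```cpp
// How many 64 KiB block reads per second it takes to saturate a given drive.
#include <cstdio>

int main() {
    const double block_bytes = 64.0 * 1024.0;  // 64 KiB asset tiles, as in the blog post

    struct Drive { const char* name; double gb_per_s; };
    const Drive drives[] = {
        {"SATA SSD (~0.55 GB/s)",   0.55},
        {"Series X raw (2.4 GB/s)", 2.4},
        {"PS5 raw (5.5 GB/s)",      5.5},
        {"PCIe 4.0 NVMe (~7 GB/s)", 7.0},
    };

    for (const Drive& d : drives) {
        const double requests_per_sec = d.gb_per_s * 1e9 / block_bytes;
        std::printf("%-26s -> ~%6.0f IO requests/s to saturate\n", d.name, requests_per_sec);
    }
    // The Series X line lands around ~36,600 requests/s, i.e. the ">35,000" figure above;
    // a blocking submit/wait/complete loop per request is exactly what the batched
    // DirectStorage queues are meant to replace.
    return 0;
}
```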
 
Okay, so you've got an entirely new PC - all new components, nothing upgraded, and all of it appears to be top of the range. You're already talking about your next PC as if you're going to replace every single component again.

It seems improbable, but let's play ball.

Are you going to be able to run benchmarks on that system throughout the course of the generation?

Edit: I priced your machine. Are you sure you haven't just created a fantasy machine using your favourite website? I can't see many people being stupid enough to buy an entirely new machine and disregard their entire pre-existing PC. A screenshot would definitely be preferable.

And anyway, if you really do have it, it would definitely still be fun to see how a machine costing nearly £4000 compares over the course of a generation. Especially since it almost exactly matches 10x the price of the PS5/XSX.

Last reply about me, my person and my upgrade path:

The HPE SSDs are from my work, zero cost.
But yeah, ever since socket 1366 (X58) I have always gone top of the line and replaced my rig every 2-3 years, the GPU often more.

I work in IT, and gaming is my cheapest hobby.
My martial arts hobby costs way more.
Besides, I am at a point in my life where I have $1800 of "fun money" every month, so no biggie.

I also have a 7.1 THX-calibrated sound setup for gaming.

I go all in with my hobbies; if that gives you an issue... that is your problem.

And that is why I am quite sure that my PC will always beat consoles in IQ/performance.

Now when UE5 comes out to PC I will make sure to bench it for you...deal?
 
But Nanite and Lumen are:
Epic disagrees:

Tim Sweeney said:
"You could render a version of this [demo on a system with an HDD], it would just be a lot lower detail"

Like most features of previous versions of UE, it's scalable.

Direct Storage makes the whole process more efficient and less of an overhead on the CPU, but you're acting as if there are literally no advantages of SSDs over HDDs without something like DirectStorage and a hardware decompression unit, which is blatantly false.
Where are you getting this information about Direct Storage?

This is wrong. The CPU I/O overhead, even outside of the decompression requirements, is still relatively high.

Relative to what? :???: Disk operations should not be using significant CPU on any platform. Windows is the outlier here because of Windows' architecture, but your average sustained I/O on any typical device shouldn't be using more than 5% of your CPU. Windows' issue isn't that it causes high CPU usage; it's that the fundamental model bottlenecks I/O and introduces latency. You can test this on your own PC. Unless you're running an IDE controller from the 1990s, you should not be having issues.
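For anyone who wants to try that, here is a minimal sketch of such a test. The file path is a placeholder, and the file should be larger than your RAM (or the cache dropped first), otherwise you are measuring the OS file cache rather than the drive:

```cpp
// Stream a large file sequentially in 64 KiB chunks and report throughput,
// while watching CPU use in Task Manager / top.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char** argv) {
    const char* path = (argc > 1) ? argv[1] : "big_test_file.bin";  // placeholder path
    std::ifstream in(path, std::ios::binary);
    if (!in) { std::fprintf(stderr, "could not open %s\n", path); return 1; }

    std::vector<char> chunk(64 * 1024);
    std::uint64_t total = 0;
    const auto start = std::chrono::steady_clock::now();

    while (in) {
        in.read(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        total += static_cast<std::uint64_t>(in.gcount());
    }

    const std::chrono::duration<double> s = std::chrono::steady_clock::now() - start;
    std::printf("Read %.2f GB in %.2f s -> %.2f GB/s\n",
                total / 1e9, s.count(), total / 1e9 / s.count());
    return 0;
}
```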

As Microsoft have made clear to Digital Foundry, there is scope to reduce that overhead by more than an order of magnitude via Direct Storage. However, I do agree with you that the overhead associated with decompression is even higher.
Can you provide a link to this? I've seen it oft quoted on ResetEra but never seen the article itself.

My key point here though is that these are CPU overheads which it's certainly great to reduce, but that doesn't mean games can't still take advantage of SSDs even without these reductions. It simply means that you need a proportionately more powerful CPU to cope with the overhead as data transfer rates ramp up. The statement from the Epic engineer is exactly in line with that: he states data transfer rates in the UE5 demo aren't that high (by SSD standards), but you nevertheless need a powerful CPU. Presumably DirectStorage would mean you need a less powerful CPU to achieve the same thing. That in no way means that it can't be achieved today, though.

You need to accept that his references to CPU usage are not about the I/O itself but about the way data is stored and the data flow for getting it into a usable state. This is different. You also need to stop making assumptions about Direct Storage, given that Microsoft themselves have said very little about the implementation on Windows.

Okay, so just to be clear, are you or are you not arguing that the UE5 demo "Lumen in the Land of Nanite" could be run on today's PCs without Direct Storage?
I'm stating neither. I've made literally no references to Lumen or Nanite. :???:

I think you misunderstood my post. I wasn't making that argument; I was repeating your argument for the benefit of the reader, as the thread of the conversation was lost when you quoted me.
No, I don't misunderstand your post. This is like your dogged determination above to get me to have some kind of view on Lumen/Nanite, which is something I didn't bring up. You keep tossing more and more things into the discussion, none of which have to do with I/O, which is the only thing I originally commented on. :(

I am really beginning to hate these awful 'debates' because every technical discussion turns into an absolute cluster fuck of people just tossing in any old bollocks to every exchange. I'm out at this point. The same people holding Direct Storage up as some panacea for I/O issues are the same people who denied PCs even had an I/O issue once Sony's I/O bandwidth numbers were made public. It's exhausting; you believe what you like, I don't care.
 
Agreed. VRAM is backed by system RAM on a PC, and a mid-range to high-end PC can have anywhere from 24-56 GB of RAM. A big chunk will be the slow system variety, but that slow RAM is still many times faster than the consoles' SSDs.

That slow SSD on the PC is going to see less traffic over the course of multiple frames being rendered, so SSD bandwidth becomes less important.

I'd rather have something like an Optane solution... Anyway, yes, consoles basically lack any kind of system RAM; the SSD somewhat mitigates that, but it's nowhere close to what even today's DDR4 system RAM does in terms of raw speed/latency. Even DDR3 outmatches them (old triple-channel setups).
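For a rough sense of scale, here are the theoretical peak bandwidths of those memory configurations next to the console SSD rates quoted in this thread. The PS5 "typical compressed" entry uses the top end of Sony's stated 8-9 GB/s figure; treat the whole table as ballpark only:

```cpp
// Peak-bandwidth comparison: standard DDR configurations vs. console SSD rates.
#include <cstdio>

int main() {
    struct Entry { const char* name; double gb_per_s; };
    const Entry entries[] = {
        {"DDR3-1600, triple channel",  38.4},  // 3 x 12.8 GB/s theoretical peak
        {"DDR4-3200, dual channel",    51.2},  // 2 x 25.6 GB/s theoretical peak
        {"PS5 SSD raw",                 5.5},
        {"PS5 SSD typical compressed",  9.0},  // top end of Sony's 8-9 GB/s figure
        {"Series X SSD raw",            2.4},
        {"Series X SSD compressed",     4.8},
    };

    const double baseline = 5.5;  // compare everything against the PS5 raw rate
    for (const Entry& e : entries) {
        std::printf("%-28s %6.1f GB/s (%4.1fx PS5 raw)\n",
                    e.name, e.gb_per_s, e.gb_per_s / baseline);
    }
    return 0;
}
```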

of course :LOL:

7 GB/s before compression is already a whole lot faster than the PS5's. 2 GB/s is close to what the XSX delivers before compression. With GPU decompression in action I would already call that magnitudes faster.

At best, a standard SSD ten times faster than the one in the PS5 will arrive in 2029-2031 with PCIe 7 (64 GB/s). The PCIe 5 (16 GB/s in theory) and PCIe 6 (32 GB/s in theory) specifications are released, and they aren't fast enough: 2022 for PCIe 5 and 2025/2026 for PCIe 6.

That's some interesting theorising, as usual. Top-end PC solutions today are already faster than the PS5's. I don't see that changing all that much IF the PS6 ever comes.
Hopefully not; we've been held back enough already.

Last reply about me, my person and my upgrade path:

Just ignore him, before he gets even more personal and other Sony fans intervene. It has happened before and it will happen again in due time. Report anything off-topic (which demanding screenshots of system specs arguably is) and use the ignore button if so desired.

But yeah, ever since socket 1366 (X58) I have always gone top of the line and replaced my rig every 2-3 years, the GPU often more.

Yeah, I don't see why having a top-of-the-line system seems so... implausible to some. In the grand scheme of things it's nowhere near big money. But again, segregation does exist.

And that is why I am quite sure that my PC will always beat consoles in IQ/performance.

They always have, and they always will. Even in raw I/O performance they're already outmatched, though DirectStorage will improve the whole situation. I'm more impressed by the findings on HDD vs SSD vs NVMe SSD in Star Citizen than by any of the PS5's showcases. Same for CP2077; the streaming tech going from one location to another is amazing. Playing that game with an HDD is almost impossible. In Doom Eternal, you go from 20-second loads to zero. Yet there is chris1515 claiming going from HDD to SSD doesn't improve things so much :)

Now when UE5 comes out to PC I will make sure to bench it for you...deal?

High-end PCs are going to tear through that demo. A fast 7 GB/s NVMe (14 GB/s and much more with GPU decompression), a 3080/6800 XT and up, fast system RAM... imagine the demo maxed out.

The same people holding Direct Storage up as some panacea for I/O issues are the same people who denied PCs even had an I/O issue once Sony's I/O bandwidth numbers were made public. It's exhausting; you believe what you like, I don't care.

Things got a whole lot different after the Nvidia showcase, though. All those bottlenecks are being worked on; the raw performance is already there and it's already amazing enough, good enough for the UE5 demo at least.
 
Things got a whole lot different after the Nvidia showcase, though. All those bottlenecks are being worked on; the raw performance is already there and it's already amazing enough, good enough for the UE5 demo at least.
Specifically which Windows bottlenecks does Nvidia's engineering resolve?
 
You're left in the dust without NVMe tech in Star Citizen (see from about 3:10 in the video below). Optane gives even better performance due to the latency advantages it has. The hyped RAM disk that never saw reality for the PS5 would have been closer to that.

[4K] Star Citizen: A Next-Gen Experience In The Making... And You Can Play-Test It Now - YouTube

The game is a mess though; my 1700X with 32 GB of RAM and a PCIe 3 NVMe drive with a 3080 or Vega 56 can't break 30fps at 4K. I'm assuming it's my CPU at this point.
 
Last reply about me, my person and my upgrade path:

The HPE SSDs are from my work, zero cost.
But yeah, ever since socket 1366 (X58) I have always gone top of the line and replaced my rig every 2-3 years, the GPU often more.

I work in IT, and gaming is my cheapest hobby.
My martial arts hobby costs way more.
Besides, I am at a point in my life where I have $1800 of "fun money" every month, so no biggie.

I also have a 7.1 THX-calibrated sound setup for gaming.

I go all in with my hobbies; if that gives you an issue... that is your problem.

And that is why I am quite sure that my PC will always beat consoles in IQ/performance.

Now when UE5 comes out to PC I will make sure to bench it for you...deal?

This is why I love the internet. Thank you.
 
Regarding posts above: I don't think dick measuring about whether your personal PC can outperform a console is really productive to the UE5 discussion. We're speculating about what software will ship based on average consumer hardware, not what the ultra high end will do.

@Chris, we're reading all the same quotes, we just have wildly different takeaways. Maybe it's my background as a 3D artist, but I feel like everything you linked argues my point.

Again, the Epic devs themselves said this is too big for videogames. I think they know better than you whether it is viable for videogames or not. The solution is to stream some of the game data from the cloud if you want to keep the same quality, or to reduce the asset quality. They created this demo; reducing the geometry detail is needed in a shipped videogame, compression is not magic. The format of the data is different, not the raw Megascans asset, and probably better for compression, but this does not change the fact that the asset quality is too big for videogames.
It's silly to think that cloud streaming will be an answer when SSDs aren't -- many users' data caps are still smaller than the PS5's SSD, let alone the speeds. Who wants 20-minute load times!?


It is funny to see people speculating as if they know better than the guy who has been working on this for years and has had this idea for more than a decade. We can add virtual shadow mapping, but I suppose it is counted inside VT. Textures are 8K in the demo and shadow maps are 16K. One million triangles is equivalent to a 4K texture in size, they said in the Unreal Engine fest video.

This is what I'm talking about! There's too much variation in formats to really speculate, but since you're using this as evidence I will too: using DXT1, a 4K texture is like 10 MB. A 1-million-tri mesh is ~50 MB as an .obj. You can already get much smaller with Draco or something, but decompression is measured in seconds there, not viable for streaming. I don't know much at all about compressed formats, but 1 mil = 4K texture sounds cutting edge to me.

Something is going on here! If they mean that every 1 mil is 10 MB, these are shippable games as is. (Not to mention the comments to just "drop a few mips" like with textures -- if it turns out Karis means "just throw it through Simplygon and take a LOD" I think devs will be very disappointed -- Nanite meshes are probably scalable in a different way than traditional unprocessed geometry. Again, maybe something along the lines of geometry textures conceptually.)
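The rough numbers behind that comparison, for anyone who wants to check them. The DXT1/BC1 math is standard (0.5 bytes per pixel); the per-line byte counts in the .obj estimate are my own ballpark assumptions:

```cpp
// Ballpark sizes for a 4K DXT1 texture and a 1M-triangle mesh in different forms.
#include <cstdio>

int main() {
    // 4K texture in DXT1/BC1: 4 bits per pixel; a full mip chain adds roughly one third
    const double tex_pixels   = 4096.0 * 4096.0;
    const double dxt1_base_mb = tex_pixels * 0.5 / (1024.0 * 1024.0);
    std::printf("4K DXT1 texture: %.1f MiB base, ~%.1f MiB with mips\n",
                dxt1_base_mb, dxt1_base_mb * 4.0 / 3.0);

    // 1M-triangle mesh as ASCII .obj (assumed ~500k vertices, ~35 bytes per "v" line,
    // ~30 bytes per "f" line -- purely illustrative)
    const double tris = 1.0e6, verts = 0.5e6;
    const double obj_mb = (verts * 35.0 + tris * 30.0) / (1024.0 * 1024.0);
    std::printf("1M-tri ASCII .obj estimate: ~%.0f MiB\n", obj_mb);

    // Same mesh packed as binary (12-byte positions + 3 x 4-byte indices per triangle),
    // before any dedicated geometry compression is applied
    const double bin_mb = (verts * 12.0 + tris * 12.0) / (1024.0 * 1024.0);
    std::printf("1M-tri packed binary estimate: ~%.0f MiB\n", bin_mb);
    return 0;
}
```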
 
Almost certainly. Seeing DF's video, it runs nicely, but with a much more capable CPU.
Looks like he hits 40fps, so I dunno. The Crytek engine was never well optimized for CPUs and Star Citizen isn't either. They need to do a lot more work. It's why I don't think the game will be out before 2024.
 
What's the big deal with the Chinese stream? They're playing an MP4 recording of the PS5 UE5 demo; it's not running on the PC :/ And they basically said that running it on a 2070 with an SSD might be possible, but it really requires a fast I/O subsystem coupled with a fast SSD. That's the tech that Epic were working with Sony to perfect: the ability to place high-detail models straight into the right location in memory without the usual overhead.
 
Regarding posts above: I don't think dick measuring about whether your personal PC can outperform a console is really productive to the UE5 discussion. We're speculating about what software will ship based on average consumer hardware, not what the ultra high end will do.

@Chris, we're reading all the same quotes, we just have wildly different takeaways. Maybe it's my background as a 3D artist, but I feel like everything you linked argues my point.


It's silly to think that cloud streaming will be an answer when SSDs aren't -- many users' data caps are still smaller than the PS5's SSD, let alone the speeds. Who wants 20-minute load times!?




This is what I'm talking about! There's too much variation in formats to really speculate, but since you're using this as evidence I will too: using DXT1, a 4K texture is like 10 MB. A 1-million-tri mesh is ~50 MB as an .obj. You can already get much smaller with Draco or something, but decompression is measured in seconds there, not viable for streaming. I don't know much at all about compressed formats, but 1 mil = 4K texture sounds cutting edge to me.

Something is going on here! If they mean that every 1 mil is 10 MB, these are shippable games as is. (Not to mention the comments to just "drop a few mips" like with textures -- if it turns out Karis means "just throw it through Simplygon and take a LOD" I think devs will be very disappointed -- Nanite meshes are probably scalable in a different way than traditional unprocessed geometry. Again, maybe something along the lines of geometry textures conceptually.)

Do you know they do this in Flight Simulator 2020?
https://www.techradar.com/news/micr...tential-of-azure-cloud-computing-in-pc-gaming

I know that what Epic is describing would mean much more data than this.

I am lucky enough to live in Europe, where data caps don't exist for home internet. We only have data caps on mobile internet, and when the cap is reached you can continue to use the internet, just not at 4G speed. Some countries have unlimited mobile internet too.

[attached screenshot: img_2027.jpg]


Here this is slowed down by the PS4 HDD. I have a 600 Mbit/s internet connection and my installation speed is limited by the HDD, not the internet connection. When I have my PS5, I will probably be able to load 10 GB in 2 minutes. I have lived in multiple European countries, and France, Switzerland, Luxembourg, Spain and Germany have great internet connections if you live in a city. I went to Portugal for work and it looks good too, but I was not there long enough to be sure.

For Kraken, geometry compression is generally 2:1 or 3:1, but he talks about mips, probably something similar to geometry textures. But again, they know the size of the demo, and this is probably not enough to reach shippable games. And some studios already have a version of the engine.

I never said it is the same as the .obj, but it seems they think the assets used in the Unreal Engine 5 demo are too big to be used in a game. As a 3D artist you must know that if they said this, they know what they are talking about. It's not as if Epic delivers one of the biggest commercial game engines, and they also shipped one of the biggest games with Fortnite.

This is the guy who created Nanite, who said this is too much for a shippable game but that you can reduce the quality, and he talked about mips, probably geometry as a texture. I believe him more than you; he is much more believable. It is incredible that people believe they know better than the guy who created the technology and who was part of the team that made the demo. The internet gets crazier every day.


The statue we say is 33M triangles? This shows just what that means. Now I wouldn't recommend or expect this for a common game asset due to the unnecessary size it would take on disk. This is a stress test to show off that the tech scales like we say it does.

In bold, the part where he said it is unrealistic to have this type of asset in a game. There is no interpretation: gamers shouldn't expect the same level of detail in a game, it is not realistic. This is just a demo.

EDIT: And fitting games on disc is a problem today. Not needing to duplicate assets like with an HDD, plus better compression, will improve the situation, but at the same time the quality of assets will explode. We have double the Blu-ray size, and with internet patches I expect devs will try not to go above 200 GB for a game.

https://bartwronski.com/2020/12/27/...allenge-productionizing-rendering-algorithms/

Finally, another “silent” budget is disk space. This isn’t discussed very often, but with modern video game pipelines we are at a point where games hardly fit on Blu Ray disks (it was a bigger challenge on God of War than fitting in memory or the performance!).

Same for Spider-Man: it was difficult there too.
 

I'm kinda suspicious about how much FS streams in terms of actual megabytes of geometry and texture detail -- surely it's something, but if it's just "road detail textures" that's less interesting. Either way, the geometry in your local environment changes much (much -- probably minutes vs milliseconds!) slower in FS than in an average commercial game (the kind of game that UE5 will have to serve hundreds of) and is much more predictable (an x-by-x km region of terrain, not ~20/600 props and interiors etc.). Your internet is definitely better than mine, but I'm sure you know how bad American internet is and how big the American market is.

In bold, the part where he said it is unrealistic to have this type of asset in a game. There is no interpretation: gamers shouldn't expect the same level of detail in a game, it is not realistic. This is just a demo.

There's no interpretation needed that 33M is wasteful for that statue -- it's clearly not even visibly contributing detail anywhere in the demo.

But there's a ton of room between "33M" and "regular game asset just like everyone else" -- if they're shipping 3M-5M average tri-count assets (probably about what you'd target if you wanted it to be completely impossible to detect any change), it's still 10+x more than most games. (And that 33M asset will run fine on PCs with average SSDs and without DirectStorage, I think; it just won't be practical to ship an entire videogame like that.)

Another assumption here -- I'd bet money the additional debug screens from that Twitter thread are running on his personal PC, not a PS5.
 
I'm kinda suspicious about how much FS streams in terms of actual megabytes of geometry and texture detail -- surely it's something, but if it's just "road detail textures" that's less interesting. Either way, the geometry in your local environment changes much (much -- probably minutes vs milliseconds!) slower in FS than in an average commercial game (the kind of game that UE5 will have to serve hundreds of) and is much more predictable (an x-by-x km region of terrain, not ~20/600 props and interiors etc.). Your internet is definitely better than mine, but I'm sure you know how bad American internet is and how big the American market is.



There's no interpretation needed that 33M is wasteful for that statue -- it's clearly not even visibly contributing detail anywhere in the demo.

But there's a ton of room between "33M" and "regular game asset just like everyone else" -- if they're shipping 3M-5M average tri-count assets (probably about what you'd target if you wanted it to be completely impossible to detect any change), it's still 10+x more than most games. (And that 33M asset will run fine on PCs with average SSDs and without DirectStorage, I think; it just won't be practical to ship an entire videogame like that.)

Another assumption here -- I'd bet money the additional debug screens from that Twitter thread are running on his personal PC, not a PS5.

I never said asset quality will not improve, but Mandalorian asset quality is probably not realistic in a game, and I know artists will do very well with assets between 1 and 5 million triangles and 4K textures. But probably for the first time since game cartridges, it is funny to have fast storage and not be able to fully use it because the storage size is not big enough.

The debug screen comes from the Unreal Engine fest presentation. It is a screenshot from that presentation.

 
I never said asset quality will not improve, but Mandalorian asset quality is probably not realistic in a game, and I know artists will do very well with assets between 1 and 5 million triangles and 4K textures. But probably for the first time since game cartridges, it is funny to have fast storage and not be able to fully use it because the storage size is not big enough.

The debug screen comes from the Unreal Engine fest presentation. It is a screenshot from that presentation.

Yeah, I agree that SSDs (regular SSDs without DirectStorage though :p) will be able to run Nanite data too big to actually ship in a game. I've always been arguing against the claim that this demo couldn't run on a regular high-end laptop at the time it was created and needed the PS5's super-fast SSD and I/O.

Bolded part: my bad, it's been a long time since I watched it.
 
Yeah, I agree that SSDs (regular SSDs without DirectStorage though :p) will be able to run Nanite data too big to actually ship in a game. I've always been arguing against the claim that this demo couldn't run on a regular high-end laptop at the time it was created and needed the PS5's super-fast SSD and I/O.

Bolded part: my bad, it's been a long time since I watched it.

I find it sad, on PC or consoles alike. Imagine a game with Mandalorian asset quality at 4K on a 3090.
 
I find it sad, on PC or consoles alike. Imagine a game with Mandalorian asset quality at 4K on a 3090.
I don't think the reality will be far off (as far as meshes are concerned). If the 1 mil = 4K texture claim is true, and if it's true that Nanite can save a lot of texture space (fewer normal maps, etc.), 5M-tri props will be very feasible in ~100-200 GB games, and you won't be able to tell the difference between 5M and 30M on most subjects.
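A back-of-envelope check of that feasibility claim, taking the thread's "1M triangles is roughly the size of one 4K texture (~10 MB)" reading at face value. That equivalence, the prop count and the budget split are assumptions for illustration only:

```cpp
// Disk-budget estimate for shipped assets vs. demo-quality assets.
#include <cstdio>

int main() {
    const double mb_per_million_tris = 10.0;   // assumed, from the "1M tris ~ 4K texture" claim

    std::printf("33M-tri stress-test statue: ~%.0f MB of unique geometry\n",
                33.0 * mb_per_million_tris);

    // Hypothetical shipped game: 500 unique props at 5M tris each vs. at demo quality (33M)
    const double props = 500.0, budget_gb = 200.0;
    const double ship_gb = props * 5.0  * mb_per_million_tris / 1000.0;
    const double demo_gb = props * 33.0 * mb_per_million_tris / 1000.0;
    std::printf("500 props x  5M tris: ~%3.0f GB geometry (%2.0f%% of a %.0f GB budget)\n",
                ship_gb, ship_gb / budget_gb * 100.0, budget_gb);
    std::printf("500 props x 33M tris: ~%3.0f GB geometry alone\n", demo_gb);
    return 0;
}
```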
 