Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

@ToTTenTranz I think @Shifty Geezer was talking about your first contribution after one of your benders?
There's still not a single sentence in that post saying the demo is exclusive to the PS5. Only that there's no hardware equivalent on the PC that replaces the PS5's I/O performance.
The only time I reference the Series X is when I say its base SSD speed is about half the PS5's, so it might have repercussions on how much geometry it can take per frame.
If Shifty got triggered by that one sentence to the point of throwing fanboy accusations, then well.. that's on him.




Obviously a typo, they meant a Volumetric Hog. :yep2:
Damn, I was sure it was Volumetric Log.. Guess I was wrong.








For the last time, no. A software API does not negate all of the driver and device I/O that Windows does.
Then if I/O performance on Windows PCs isn't going to match the next-gen consoles any time soon, due to software and hardware limitations, how about the following alternatives?


1 - A PCIe add-in board with an ASIC that decompresses data and connects directly to the GPU through an Infinity Fabric link (similar to what AMD put on the recent Radeon Pro VII, but simpler/narrower) and/or NVLink for Nvidia cards (not possible anytime soon).

2 - The game itself requiring humongous amounts of RAM, like 32GB minimum / 64GB recommended, so that the GPU is fed directly from RAM and not from the I/O. This would require loading time up front to decompress the current area's geometry into main RAM, and then some smart prediction to constantly feed the RAM with assets that are expected to be needed next (possible right now) - see the rough sketch below.
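To make option 2 a bit more concrete, here's a rough sketch (in Python) of what that prediction loop might look like. It's purely illustrative: the RAM budget, the asset names and the predict_next_assets helper are all made up for the example, and a real engine would do this inside its own streaming system.

import zlib

RAM_BUDGET = 48 * 1024**3      # hypothetical: reserve ~48GB of a 64GB system for the asset cache
asset_cache = {}               # asset_id -> decompressed bytes, ready to upload to the GPU
cache_bytes = 0

def predict_next_assets(player_position):
    # Placeholder: a real engine would use the camera path, streaming volumes, portals, etc.
    return ["cliff_statues_lod0", "temple_interior_lod0"]

def prefetch(asset_id, read_compressed):
    # Decompress an asset into the RAM cache ahead of time (CPU-side, no special hardware).
    global cache_bytes
    if asset_id in asset_cache:
        return
    data = zlib.decompress(read_compressed(asset_id))
    if cache_bytes + len(data) <= RAM_BUDGET:
        asset_cache[asset_id] = data
        cache_bytes += len(data)

def tick(player_position, read_compressed):
    # Called every frame or so: keep the cache topped up with whatever the predictor
    # expects next, so the GPU is fed from RAM instead of hitting the SSD mid-frame.
    for asset_id in predict_next_assets(player_position):
        prefetch(asset_id, read_compressed)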




Neither GPU was powerful. The RTX 2080 mobile is equivalent to a desktop RTX 2070, and the laptop has the 2080 Max-Q version, which is weaker than the RTX 2080 mobile and equivalent to a desktop RTX 2060. The NVIDIA GeForce RTX 2080 with Max-Q design is the power-saving variant of the mobile RTX 2080, with reduced clock speeds and power consumption.
If that laptop had a 2080 Max-Q then its performance is close to a desktop GTX 1080 / RTX 2060 / RX 5700, all of which are in a lower performance bracket than either console.
 
The only way Microsoft can fix this in software is to toss most of the Windows code in the bin, along with the filesystem, and start over. This will break everything. It's a non-starter.

Don't forget about the mythical GameCore integration. It wouldn't surprise me if parts of DirectX 12 Ultimate have yet to be fully exposed. If what has been hinted at comes to fruition, it might bring more console-like integration to PC games.

Tommy McClain
 
If Shifty got triggered by that one sentence to the point of throwing fanboy accusations, then well.. that's on him.

All the points you raised and material you provided had already been linked in this thread, probably multiple times, and discussed to death. This discussion just repeats again and again as new people come in who didn't read the thread and start discussing things where conclusions were already reached. For someone who has followed the thread, this discussion is like the movie Groundhog Day.

To make it worse, every new iteration of this discussion starts with "my favorite platform is the best" or "that technology is absolutely not needed", and the biased platform warring ensues.
 
This discussion just repeats again and again as new people come in who didn't read the thread and start discussing things where conclusions were already reached. For someone who has followed the thread, this discussion is like the movie Groundhog Day.

To make it worse, every new iteration of this discussion starts with "my favorite platform is the best" or "that technology is absolutely not needed", and the biased platform warring ensues.

Exactly. It's déjà vu ad nauseam.
 
There's still not a single sentence in that post saying the demo is exclusive to the PS5.
You're right, I was wrong. It was the third post since your return, not the first...

...they weren't made on a PC because it can't be done on a PC.
Now how much time did it take me to go back to your post, find the subsequent ones and count off the specifics, when the gist of what I was saying isn't affected? And then to type this explanation, which is exactly the discussion I was telling you I wanted to avoid! I could go on and on about interpreting your first post, but it's of no value to the discussion.

How many posts is this now about the discussion and people's choice of words and whether they number posts correctly or not? How much of this is valuable discussion about UE5? How many more words are going to be spent talking about who said what and whether someone can count properly or not? It's not worth it for anyone involved to be that particular, to ensure they quote exactly and reference exactly. That's the standard trapping of many a discussion, arguing over who said what when. If we avoid absolutes, engage more in the spirit of the discussion, and don't haul people over the coals when they aren't 100% right on every detail, we can talk about the things we actually want to talk about.

I'm urging you to get into the spirit of more open discussion and not get bogged down with the constant back-and-forth of minutiae about who said what and how to interpret what they said. We're not a law court! At the moment my beseeching seems to be falling on deaf ears as we're doing the same dance. Please let it go and just focus on talking about the tech in a less intense fashion. ;) That's what I'll do now, 100%, and I won't respond to any more discussion about the discussion.
 
Was it ever confirmed anywhere that RDNA1/2 supports HBCC?

Not as far as I'm aware, but given this technology is about to come crashing into the mainstream with the new consoles, it seems crazy to remove it at this point (it was always a forward-looking feature, being pretty useless on Vega).
 
AMD has had this capability in its commercial GPUs since Vega. It's just not enabled for SSD access in the drivers, as far as I'm aware.

https://images.app.goo.gl/jqxW4grAjk3XLHsG8
https://www.amd.com/en/products/professional-graphics/radeon-pro-ssg

AMD makes professional cards. It'd be nice if AMD just put an NVMe slot on the back of its RDNA 2 cards later this year so that we as consumers could buy an SSD and add it to the card. Or, as someone else suggested, have a CrossFire-like connection to a PCIe card with fast NVMe drives.
 
Pure transfer speeds? Nope, there's faster, if you really want it. By the time the PS5 releases there will be 7GB/s drives. I'd rather have Optane if it were less expensive, though; it performs blazingly fast, more akin to DDR RAM than NAND.

There's faster, yes, but using RAID; no single current-gen NVMe PCIe SSD is faster. Towards the end of the year, sure - Cerny said as much, since you can use 7GB/s SSDs (at minimum) in the PS5 - but even then on a PC there's driver/OS/filesystem overhead. Then also remember the PS5 can do 9.0GB/s compressed, and if using Kraken, Cerny says that in some cases you can even get over 22GB/s. 9GB/s is out of reach for those 7GB/s SSDs.

I don't recall even RAID SSDs doing over 22GB/s unless in a datacenter/render farm running RAID with 10 or 20 SSDs. Maybe then.
 
There's faster, yes, but using RAID; no single current-gen NVMe PCIe SSD is faster. Towards the end of the year, sure - Cerny said as much, since you can use 7GB/s SSDs (at minimum) in the PS5 - but even then on a PC there's driver/OS/filesystem overhead. Then also remember the PS5 can do 9.0GB/s compressed, and if using Kraken, Cerny says that in some cases you can even get over 22GB/s. 9GB/s is out of reach for those 7GB/s SSDs.

I don't recall even RAID SSDs doing over 22GB/s unless in a datacenter/render farm running RAID with 10 or 20 SSDs. Maybe then.

22GB/s is the maximum output of the decompression hardware, not the SSD; the SSD's maximum transfer rate is 5.5GB/s (sequential, on a very large file?), and realistically 7-9GB/s at best once the data is (de)compressed with the Kraken hardware decoder.
And we don't know how it is cooled (SSD performance depends on temperature), or how it performs with small files (such as texture tiles addressed directly on the SSD, etc.) or when read and write requests compete for resources.

A datacenter with 10-20 top-class SSDs at $2,000+ each can't be touched by cheap hardware like a console; that's dreaming, be realistic.
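To put rough numbers on that, a back-of-the-envelope sketch in Python (the bandwidth figures are the ones quoted in this thread; the compression ratios are assumptions, not measurements):

# Effective read throughput: raw SSD bandwidth times compression ratio,
# capped by what the decompression hardware can actually put out.
def effective_throughput(raw_gbps, compression_ratio, decompressor_cap_gbps=22.0):
    return min(raw_gbps * compression_ratio, decompressor_cap_gbps)

print(effective_throughput(5.5, 1.64))  # ~9 GB/s, the typical Kraken case Cerny quotes
print(effective_throughput(5.5, 4.0))   # 22 GB/s, only reached if the data happens to compress ~4:1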
 
22GB/s is the maximum output of the decompression hardware, not the SSD; the SSD's maximum transfer rate is 5.5GB/s (sequential, on a very large file?), and realistically 7-9GB/s at best once the data is (de)compressed with the Kraken hardware decoder.
And we don't know how it is cooled (SSD performance depends on temperature), or how it performs with small files (such as texture tiles addressed directly on the SSD, etc.) or when read and write requests compete for resources.

A datacenter with 10-20 top-class SSDs at $2,000+ each can't be touched by cheap hardware like a console; that's dreaming, be realistic.

Oh, I am aware that the PS5 SSD has a max of 5.5GB/s. Even so, with a 7GB/s PC SSD, would a drive be able to deliver 9GB/s of compressed data when the PC CPU is doing the decompression, i.e. would a CPU be quick enough to decompress data fast enough to saturate a 7GB/s SSD, with its driver/OS/filesystem overhead on top? Remember that the PS5's SSD controller is specifically designed for this; on PCs the CPU would have to do it and is likely less efficient than the PS5 controller. Maybe in a year or so, someone will release a more efficient program that decompresses better than the PS5 controller.

We know the XBSX does a better job at compression, or at least it seems to: 2.4GB/s to 4.8GB/s is a 100% gain, while the PS5's 5.5GB/s to 9GB/s is a 63% gain. But then again, the smaller gain on PS5 might be down to the raw speed of the SSD - it might be easier for a slower SSD to gain relatively more from compression. I don't know if I am explaining this correctly; in other words, the higher you go, the less relative gain you get.

Or the compression engine is just worse than the XBSX compression engine.
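For what it's worth, the relative gains work out like this (same figures as quoted above, nothing new):

# Relative gain from compression = (compressed - raw) / raw.
def gain_percent(raw_gbps, compressed_gbps):
    return (compressed_gbps - raw_gbps) / raw_gbps * 100

print(gain_percent(2.4, 4.8))  # XBSX: 100%
print(gain_percent(5.5, 9.0))  # PS5: ~63.6%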
 
Oh, I am aware that the PS5 SSD has a max of 5.5GB/s. Even so, with a 7GB/s PC SSD, would a drive be able to deliver 9GB/s of compressed data when the PC CPU is doing the decompression, i.e. would a CPU be quick enough to decompress data fast enough to saturate a 7GB/s SSD, with its driver/OS/filesystem overhead on top? Remember that the PS5's SSD controller is specifically designed for this; on PCs the CPU would have to do it and is likely less efficient than the PS5 controller. Maybe in a year or so, someone will release a more efficient program that decompresses better than the PS5 controller.

We know the XBSX does a better job at compression, or at least it seems to: 2.4GB/s to 4.8GB/s is a 100% gain, while the PS5's 5.5GB/s to 9GB/s is a 63% gain. But then again, the smaller gain on PS5 might be down to the raw speed of the SSD - it might be easier for a slower SSD to gain relatively more from compression. I don't know if I am explaining this correctly; in other words, the higher you go, the less relative gain you get.

Or the compression engine is just worse than the XBSX compression engine.

Well, just remember that SSD performance depends on temperature; this is why I want to see it in action before saying what is fast and what is not. We are talking at a theoretical level, not a real one, about a cheap SSD closed in a small, warm box like a console. Maybe it will work fast, maybe well, maybe decently. I don't want to downplay expectations, only to be careful.

Maybe the two SSDs have different approaches: Sony maxed out the real sequential rate (5.5GB/s) to load big assets faster (like those in the UE5 tech demo), while Microsoft maybe tuned its SSD to be faster with small files, to work better with mesh shaders and small tiled resources. A few giant assets vs. a lot of tiny tiles of assets.
We don't know which solution will work better. I suppose it's wise to say that a tech demo tuned for the PS5 will run better on the PS5, and a tech demo tuned for the XSX will run better on the XSX; there's no need to compare the two consoles.
PCs are a different world. Saying that a PC can't reach this level of detail in UE technology because of the I/O overhead is simply false; even Unreal Engine 4 has shown a better detail level with static geometry demos (no enemies, no vehicles, no trees, no AI, no net code, etc. - game code is different).




Indeed, UE5 will shine on all the platforms, that much is sure, and we'll see a lot of games using its tech. We should focus on talking about the tech instead of comparing hardware platforms, SSDs, and so on. Just my two cents.
 
Just to add to that, the PS5 won't be transferring data at 22GB/s even 5% of the time; Cerny said best-case scenario, so that might be 0.001% of the time. Cerny says Kraken is 10% more efficient, but with the compression engine, in best-case scenarios, it delivers over 22GB/s. So let's say the compression engine makes Kraken 10-30% more efficient (remember we don't know how much more efficient the engine will make it - it could be 50%, it could be 12%).

But let's do some quick math: 9 + 10% is 9.9GB/s, and 9 + 30% is 11.7GB/s.

Also note that this would be textures only - would Kraken be used for data other than textures?

To contrast with the XBSX: it uses BCPack for texture compression, and that seems more efficient. I think I read somewhere that it's up to 30-40% more efficient than Kraken's 10% gain.

4.8GB/s + 40% is 6.72GB/s. Again, this seems to align with what MS said about the SSD and the BCPack compression ratio, i.e. over 6GB/s.
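Spelling that quick math out (a minimal sketch; all the percentages are the guesses from this post, not official figures):

print(9.0 * 1.10)  # 9.9 GB/s if the engine adds 10% over the Kraken baseline
print(9.0 * 1.30)  # 11.7 GB/s if it adds 30%
print(4.8 * 1.40)  # 6.72 GB/s for XBSX if BCPack is ~40% better, in line with MS's "over 6GB/s"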
 