Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

No one is developing a game for a 2080 Ti as the baseline, after all.

Since XSX is not too far from that 2080 Ti, we actually have a 2070/2080 baseline minimum with the next-gen consoles. A 3080 Ti will only enable higher settings, framerates and resolutions, and most likely better RT performance.


Now the Jaguar-based next-gen consoles make sense; who needs powerful CPUs/GPUs when you can stuff in an SSD :D

Or even 2080 Ti region.

And beyond ;)

An SSG version of a gaming card could be a pricier way to get a storage system into PCs with parameters similar to the customized console subsystems. It wouldn't require mass replacement of all the systems where the CPU doesn't have built-in compression and extra DCMAC hardware and the motherboard lacks a PCIe 4.0 NVMe slot. There would need to be some transfers over the PCIe bus to the graphics card, but those could be limited to swapping a game's asset partition in and out rather than constant transfers.
It'd be a value-add for AMD's hardware, at least.

Makes sense, as it's not uncommon for professional hardware solutions to make it into gaming products.
 
An SSG version of a gaming card could be a pricier way to get a storage system into PCs with parameters similar to the customized console subsystems. It wouldn't require mass replacement of all the systems where the CPU doesn't have built-in compression and extra DCMAC hardware and the motherboard lacks a PCIe 4.0 NVMe slot. There would need to be some transfers over the PCIe bus to the graphics card, but those could be limited to swapping a game's asset partition in and out rather than constant transfers.
It'd be a value-add for AMD's hardware, at least.

Would it be possible for SSG to work in concert with an external PCIe 4.0 NVMe-based SSD if coupled with a Zen 2 based system? As I understand it, the SSD would communicate directly with the I/O die on the CPU, but I'm unclear whether it could pass the data straight through from there to the GPU memory (over the PCIe 4.0 x16 link) without going via the system memory and CPU, as I understand is the case for the XSX (and presumably the PS5).
 
Cerny said in the presentation that access to the game assets is mapped, and the dev doesn't even need to know if or how the data is compressed; they address the virtual uncompressed data layout, and it's all transparent.

Of course they still need to know whether it comes from a 2.4 GB/s or a 5.5 GB/s drive. Can't defy the laws of logic.
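
To make the "transparent" part concrete, here's a toy sketch of the idea as I understand it. This is my own illustration with a trivial RLE codec and an in-memory "disk"; every name here is hypothetical, and it is not Sony's actual API. The point is the addressing model: the game reads by uncompressed offset, and the loader resolves and decompresses the backing blocks behind the scenes.

[code]
// Toy illustration (hypothetical, not Sony's SDK): game code reads assets by
// their *uncompressed* offset; the loader transparently locates and
// decompresses the backing blocks, so the caller never learns whether or how
// the data was compressed on "disk".
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

constexpr size_t kBlockSize = 16;  // each block decompresses to 16 bytes

// On-"disk" block: run-length pairs (count, value).
struct Block { std::vector<std::pair<uint8_t, uint8_t>> runs; };

// Stand-in for the drive contents.
static const std::vector<Block> gDisk = {
    {{{16, 0xAA}}},             // block 0: 16 bytes of 0xAA
    {{{8, 0x11}, {8, 0x22}}},   // block 1: 8 bytes of 0x11, then 8 of 0x22
};

// Decompress one block; game code never calls this directly.
static void DecompressBlock(const Block& b, uint8_t* out) {
    size_t i = 0;
    for (auto [count, value] : b.runs)
        while (count--) out[i++] = value;
}

// The only call the "game" sees: read by virtual uncompressed offset/length.
std::vector<uint8_t> ReadAssets(size_t offset, size_t length) {
    std::vector<uint8_t> result(length);
    for (size_t i = 0; i < length;) {
        size_t block  = (offset + i) / kBlockSize;
        size_t within = (offset + i) % kBlockSize;
        uint8_t tmp[kBlockSize];
        DecompressBlock(gDisk[block], tmp);
        size_t n = std::min(kBlockSize - within, length - i);
        std::copy(tmp + within, tmp + within + n, result.begin() + i);
        i += n;
    }
    return result;
}

int main() {
    for (uint8_t b : ReadAssets(12, 8))   // read straddles blocks 0 and 1
        std::printf("%02X ", b);          // prints: AA AA AA AA 11 11 11 11
}
[/code]

A real implementation would obviously do this in hardware with Kraken/zlib-class codecs rather than RLE in software, but the addressing model is the point.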

Thanks, this was my understanding.

I can't tell if people are serious anymore.

No, SSDs will not increase your framerate by 50%.

Unless it was back in the days of the DayZ mod ;)
 
I'm not sure we need to explain it for the 100th time. It was perfectly fine the first 10 times, but now it's getting ridiculous.
100 times inaccurately.
They had problems maintaining 2 GHz + 3 GHz (GPU + CPU) for the maximum possible load, because it's something that nobody does in the hardware world.
Incorrect. Fixed clocks are common. Exhibit A) PS4: while under max load (i.e. God of War), the CPU and GPU will never dip below or go above 1.6 GHz and 800 MHz respectively. The developer is guaranteed that performance. Heat and fan noise are another story.
XBSX claims to do it, but that is the claim that should be scrutinized, because that claim is the unrealistic one, not Sony's.
Incorrect. The claim is normal and there is no basis for your assertion.
Sony's claim is perfectly fine: when power-hungry operations are used too much, the GPU or CPU underclocks.
Agreed, except for the phrasing "used too much". It's not a scenario where you can use 100% of the chip for short periods and then it slows down; the closer you get to 100%, the more it slows down. The thing which hasn't been disclosed, and the real point of contention, is what the clocks are at 100% utilization. The numbers that make sense are 2 GHz and 3 GHz, because that is what Cerny himself indicated they would be had they opted for "fixed" clocks.
It has been the case for every GPU and CPU till now.
Incorrect. While variable clocks in and of themselves aren't new, in the console space clocks are generally deterministic, with a predefined consistent speed. PS5 keeps the deterministic part, in a way, by ignoring heat; variable clocks are otherwise how things in the PC space generally operate. But even there, the min clocks under load are disclosed. The issue with Sony is they have failed to disclose those max-load clocks, and so folks cling to the "most of the time" and "a couple percent" statements, which have no context of load to go with them.
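
For what it's worth, here's a toy model of how a deterministic, power-capped boost clock could behave. The assumptions are entirely mine; Sony hasn't disclosed the actual curve or the clock at 100% utilization. Dynamic power scales roughly with activity x f x V^2, and voltage rises with frequency, so power is modeled as activity x f^3; the chip then runs the highest clock that fits a fixed budget, capped at fmax. Note there is no temperature input, which is exactly the "deterministic" property being argued about:

[code]
// Toy model of a deterministic power-capped boost clock. Assumptions (mine):
// power ~ activity * f^3, and the budget is sized so the 2.23 GHz cap holds
// up to 90% of worst-case activity. No temperature term: same workload,
// same clock, every time.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <initializer_list>

double BoostClockGHz(double activity) {   // activity: fraction of worst case, 0..1
    const double fmax = 2.23;             // PS5's advertised GPU clock cap
    const double knee = 0.9;              // assumed activity where the budget binds
    // Solve activity * f^3 = knee * fmax^3 for f, capped at fmax.
    return fmax * std::cbrt(knee / std::max(activity, knee));
}

int main() {
    for (double a : {0.50, 0.90, 0.95, 1.00})
        std::printf("activity %.2f -> %.3f GHz\n", a, BoostClockGHz(a));
    // 0.50 -> 2.230, 0.90 -> 2.230, 0.95 -> 2.190, 1.00 -> 2.153
}
[/code]

Under a model like this, "at max clock most of the time" and "a couple percent drop" can both be true simultaneously; the whole argument above reduces to nobody having said where the knee actually sits.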
 
While under max load (i.e. God of War), the CPU and GPU will never dip below or go above 1.6 GHz and 800 MHz respectively. The developer is guaranteed that performance. Heat and fan noise are another story.

1. It's the same story. They wanted to change the PS4 story.
2. God of War is not max load. Unless you have any real utilization numbers, I would place GoW under 50% utilization; why not?

Incorrect. Claim is normal and there is no basis for your assertion.

What about AVX? The CPU never slows down even at 100% AVX load? Unrealistic.

The thing which hasn't been disclosed, and the real point of contention is what are the clocks at 100% utilization.

So you're telling me that XBSX clocks under 100% utilization are rock solid? Really? Unrealistic again.

The issue with Sony, is they have failed to disclosed those max load clocks

I'm not sure where I can find any "max load" clocks for other hardware. I'm not talking about benchmarks, just the real max-load numbers from the vendor. Any examples? "Base clock" is not it.
 
I can't tell if people are serious anymore.

No, SSDs will not increase your framerate by 50%.

I would not be so sure about that, although I would prefer not to advance a number...

But take a look at this example of frustum culling:

(I'm not managing to embed an animated GIF, so I'll leave a link.)

https://giphy.com/gifs/linarf-xUPGcgiYkD2EQ8jc5O

If an SSD allows you to avoid processing anything outside the cone of vision, you will save a lot of processing power.
And this is a good example of frustum culling; other games are not so efficient.
I cannot say how much processing power you could save here if an SSD reduced the outside part to a minimum, but looking at that image there are moments where the outside part looks almost as large as the inner part.
Another case is when you are limited on geometry by HDD speed. An SSD could allow for extra geometry. This would not be a performance boost per se, but it would count as one due to the increased visuals.
And an extra fast SSD can even do both.
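
For anyone unfamiliar, the test the GIF is visualizing is standard: a frustum is six inward-facing planes, and an object's bounding sphere gets skipped if it sits entirely behind any one of them. A minimal sketch of the test (my own, not from any particular engine):

[code]
// Minimal sphere-vs-frustum cull: an object is visible unless its bounding
// sphere lies entirely behind one of the six frustum planes. Everything
// culled here never costs vertex or pixel work on the GPU.
#include <array>
#include <cstdio>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };   // n*p + d >= 0 means "inside"
struct Sphere { Vec3 c; float r; };

float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool IsVisible(const std::array<Plane, 6>& frustum, const Sphere& s) {
    for (const Plane& p : frustum)
        if (Dot(p.n, s.c) + p.d < -s.r)   // sphere fully behind this plane
            return false;                 // -> cull, skip all GPU work
    return true;                          // intersects or inside the frustum
}

int main() {
    // Axis-aligned toy "frustum": a box from -10..10 on each axis.
    std::array<Plane, 6> f{{
        {{ 1, 0, 0}, 10}, {{-1, 0, 0}, 10},
        {{ 0, 1, 0}, 10}, {{ 0,-1, 0}, 10},
        {{ 0, 0, 1}, 10}, {{ 0, 0,-1}, 10},
    }};
    std::printf("%d %d\n", IsVisible(f, {{0, 0,  0}, 1}),   // 1: on screen
                           IsVisible(f, {{0, 0, 30}, 1}));  // 0: culled
}
[/code]

The SSD angle is orthogonal to the test itself: what faster storage changes is how little of the culled world needs to be resident in RAM at all.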
 
I would not be so sure about that, although I would prefer not to advance a number...

But take a look at this example of frustum culling:

(I'm not managing to embed an animated GIF, so I'll leave a link.)

https://giphy.com/gifs/linarf-xUPGcgiYkD2EQ8jc5O

If an SSD allows you to avoid processing anything outside the cone of vision, you will save a lot of processing power.
And this is a good example of frustum culling; other games are not so efficient.
I cannot say how much processing power you could save here if an SSD reduced the outside part to a minimum, but looking at that image there are moments where the outside part looks almost as large as the inner part.
Another case is when you are limited on geometry by HDD speed. An SSD could allow for extra geometry. This would not be a performance boost per se, but it would count as one due to the increased visuals.
And an extra fast SSD can even do both.
lol.
Sure, I guess. If I/O is the limiter.

It's not going to make your 1080 operate like a 2080 S though.
 
And yet that is exactly what a lot of people have been saying in the forums about PS5.

Really? I haven't seen that anywhere. I don't think anyone who can be taken seriously has suggested the SSD can make up for the computational difference between the consoles.

The total disregard for it is more odd to me though, especially when you have devs excited by it.

But then again, I don't really give much time to forums or posters who behave like children in a sandpit, so maybe I haven't seen what you have.
 
lol.
Sure, I guess. If I/O is the limiter.

It's not going to make your 1080 operate like a 2080 S though.

Just didn’t understood the lol.
I gave an example... It's a valid one!
Just don’t forget That with an SSD you can change the way you create games, putting all the GPU power on what’s on screen and relying on it to stream the rest.

But want to get out of that example? What about it beeing used as virtual RAM. Like when Ray tracing starts creating those large data sets that grow exponentially with adicional geometry and high resolution textures, using gigantic amounts of memory, and limiting you on what you can have on screen and what you can do with it (RT) due not to GPU performance, but to memory limitations.
 
Just didn’t understood the lol.
I gave an example... It's a valid one!
Just don’t forget That with an SSD you can change the way you create games, putting all the GPU power on what’s on screen and relying on it to stream the rest.

But want to get out of that example? What about it beeing used as virtual RAM. Like when Ray tracing starts creating those large data sets that grow exponentially with adicional geometry and high resolution textures, using gigantic amounts of memory, and limiting you on what you can have on screen and what you can do with it (RT) due not to GPU performance, but to memory limitations.


Let's see, PS5 best-case SSD is 9 GB/s, right? Whereas the PS5 GPU has 448 GB/s of bandwidth and the Xbox 560 GB/s to its 10 GB.

So roughly 1/50 and 1/62. The PS5 SSD does not even match what Xbox 360 and PS3 GPU bandwidth were doing IIRC (20-some GB/s: 2x ~25 GB/s for PS3, ~25 GB/s plus EDRAM for 360).

So if you made a GPU of 10+ TF with even just 200 GB/s or 300 GB/s, you'd be accused of being majorly bandwidth starved. What is 9 GB/s in that ocean?
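
To put those ratios in per-frame terms (my arithmetic, using the figures as quoted in this thread):

[code]
// Bytes each pool can move in one 60 fps frame, using the numbers above.
#include <cstdio>

int main() {
    const double frame_s  = 1.0 / 60.0;
    const double ssd_gbs  = 9.0;     // PS5 SSD, best-case compressed (as quoted)
    const double vram_gbs = 448.0;   // PS5 GDDR6 bandwidth
    std::printf("SSD per frame:  %6.0f MB\n", ssd_gbs  * 1000.0 * frame_s);  // ~150 MB
    std::printf("VRAM per frame: %6.0f MB\n", vram_gbs * 1000.0 * frame_s);  // ~7467 MB
    // ~50x apart: the SSD can refresh what's resident between frames, but it
    // can't substitute for the bandwidth a frame's worth of shading consumes.
}
[/code]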
 
Just didn’t understood the lol.
I gave an example... It's a valid one!
Just don’t forget That with an SSD you can change the way you create games, putting all the GPU power on what’s on screen and relying on it to stream the rest.

But want to get out of that example? What about it beeing used as virtual RAM. Like when Ray tracing starts creating those large data sets that grow exponentially with adicional geometry and high resolution textures, using gigantic amounts of memory, and limiting you on what you can have on screen and what you can do with it (RT) due not to GPU performance, but to memory limitations.

RT structures take up 1 GB of VRAM IIRC, according to the tests done by DF (@Dictator). So it wouldn't work quite like that.
But using your example: it's much more likely that the processing power of the GPU would die under such a heavy load well before I/O became the limiter, and an SSD could never keep up as virtual memory for a GPU to write results out to. It wouldn't last very long either, considering how often we modify the contents of memory.

I mean, we have largely gone all the way to 9 TF without needing an SSD, and people have been using SSDs in the PC space since maybe 2007 (about 320 GB; I think 256 GB was what I was looking at as a boot drive).
We're only now hitting 2020, where we know the slow HDD is the limiter for the high-quality textures we needed for higher-end resolutions, which required more bandwidth, which required a lot more processing power.

Any modern SSD would be enough to break any sort of I/O limit for some time, let alone the ones in the console space. And as we were speaking on the topic of (asymmetric) bandwidth earlier, we're talking about needing bandwidth well above 500 GB/s for demanding, complex scenes with ray tracing. These SSDs aren't anywhere close.
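
And bandwidth is the friendly comparison; latency is worse. The order-of-magnitude figures below are my assumptions, not measurements (NVMe random reads sit in the tens-to-hundreds of microseconds, VRAM accesses in the hundreds of nanoseconds):

[code]
// Rough per-access latency gap between VRAM and an NVMe SSD (assumed figures).
#include <cstdio>

int main() {
    const double vram_ns = 300.0;      // ~hundreds of ns to GDDR6 (assumption)
    const double nvme_ns = 100000.0;   // ~100 us NVMe random read (assumption)
    std::printf("NVMe is ~%.0fx slower per access\n", nvme_ns / vram_ns);
    // ~300x per access -- and BVH traversal chains many dependent reads per
    // ray, so paging RT structures to the SSD would stall rays for whole frames.
}
[/code]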
 