Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

It could. I use decompression on GPU for my work; we do a lot of work with the Parquet format, and there are multiple ways to compress and decompress. I don't know if you can go from hard drive to GPU directly; I thought you'd need to pass through memory first on the bus, but I could be wrong.

DirectStorage is designed for SSD... but it is a direct to GPU solution... thus "Direct" "Storage". It's a development that comes from AMD's Solid State Graphics tech.

 
DirectStorage is designed for SSD... but it is a direct to GPU solution... thus "Direct" "Storage".

That is not how it's described.

DirectStorage – DirectStorage is an all new I/O system designed specifically for gaming to unleash the full performance of the SSD and hardware decompression. It is one of the components that comprise the Xbox Velocity Architecture. Modern games perform asset streaming in the background to continuously load the next parts of the world while you play, and DirectStorage can reduce the CPU overhead for these I/O operations from multiple cores to taking just a small fraction of a single core; thereby freeing considerable CPU power for the game to spend on areas like better physics or more NPCs in a scene. This newest member of the DirectX family is being introduced with Xbox Series X and we plan to bring it to Windows as well.
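As a rough illustration of the CPU-overhead claim in that description, here's a toy cost model with invented numbers; it is not how DirectStorage is actually implemented, just the arithmetic behind "from multiple cores to a small fraction of a single core":

```python
# Illustrative cost model (all numbers are assumptions, not measurements):
# batching many small I/O requests and offloading decompression to
# dedicated hardware shrinks CPU time per frame compared with a
# one-syscall-per-asset path with software decompression.

def cpu_ms_per_frame(requests, syscall_us, decompress_us_per_req, batched):
    """Estimate CPU milliseconds spent on I/O for one frame."""
    if batched:
        # One submission covers the whole batch; decompression runs on
        # dedicated hardware, so the CPU only pays the submission cost.
        return syscall_us / 1000.0
    # Traditional path: per-request syscall plus software decompression.
    return requests * (syscall_us + decompress_us_per_req) / 1000.0

traditional = cpu_ms_per_frame(2000, syscall_us=10, decompress_us_per_req=50, batched=False)
direct      = cpu_ms_per_frame(2000, syscall_us=10, decompress_us_per_req=50, batched=True)
print(f"traditional: {traditional:.2f} ms, batched+hw: {direct:.3f} ms")
```

With a couple of thousand streaming requests per frame, the per-request costs dominate, which is why moving them off the CPU matters at all.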
 
Some electromagnetic radiation, though much of it isn't going to get past the metal the chip is under or the shielding all around the active components of the console.
There could be some mechanical energy related to how the components may shift or change size as a result of thermal cycling, with potentially permanent changes due to deformation or cracking indicating a permanent change in the energy in those layers.

At the silicon level, tiny amounts of work are done when electrons can tunnel across barriers and become stuck, altering the charge for a time. Metal atoms in the interconnect and dopant atoms in the silicon will travel short distances under the influence of voltage potentials. These are part of the eventual degradation and failure of the silicon, assuming nothing else fails before they do. The amount of energy it would take to permanently shift the threshold of a transistor or cause 40nm wires to form gaps or a short would be a tiny amount spread out over a decade or more, but non-zero.
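The electromigration wear described above is commonly modeled with Black's equation; here's a small sketch where the constants (prefactor, current-density exponent, activation energy) are textbook-style illustrative assumptions, not measured values for any particular chip:

```python
import math

# Black's equation for electromigration median time to failure:
#   MTTF = A * J**-n * exp(Ea / (k * T))
# A: material constant, J: current density, n: ~1-2, Ea: activation
# energy in eV, k: Boltzmann's constant in eV/K, T: temperature in K.
# All parameter values below are assumptions for illustration only.

K_BOLTZMANN = 8.617e-5  # eV/K

def mttf(j_density, temp_k, a=1.0, n=2.0, ea=0.9):
    """Relative median time to failure (arbitrary units)."""
    return a * j_density ** -n * math.exp(ea / (K_BOLTZMANN * temp_k))

# Same current density, junction 30 C cooler (60 C vs 90 C):
ratio = mttf(1.0, 333.15) / mttf(1.0, 363.15)
print(f"Running 30 C cooler extends the median lifetime ~{ratio:.1f}x")
```

The exponential temperature term is why these wear-out mechanisms take a decade or more at normal operating conditions, exactly as the post says.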

If someone wants more knowledge on the electromigration being described above, see here:

 
$100 more for 2 - 3 times more power is always worth it in pure terms, but if the software is not designed from the ground up to take advantage of it then it's not worth it. Why would I go to the trouble of selling my PS4 to upgrade to the Pro? I didn't think it was worth the trouble. But if the games were vastly different in quality then I would've.

Everyone would pay $100 more for a 2x - 3x improvement.
It’s the easiest equation ever...
30% more cost for 200% - 300% more performance.
As a company if you can’t sell that then..........

If that were the case then PS4-P and XBO-X would garner the majority of sales over their base counterparts, but that isn't the case. The XBO-X is over 300% more performant than the XBO-S and generally about 100 USD more, yet the XBO-S sells significantly more units.

Price is important to the majority of console buyers. It's also one of the reasons why the majority of console buyers are unlikely to ever buy a new gaming PC.
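To put rough numbers on the price/performance point (prices and TFLOPS below are approximate launch-era figures, used only for illustration):

```python
# Back-of-the-envelope check on the "30% more cost for 300% more
# performance" claim, using rough figures (assumed, not exact):
# Xbox One S ~1.4 TF at ~$299, Xbox One X ~6.0 TF at ~$499.

consoles = {
    "XBO-S": {"tflops": 1.4, "price": 299},
    "XBO-X": {"tflops": 6.0, "price": 499},
}

for name, c in consoles.items():
    print(f"{name}: {c['tflops'] / c['price'] * 1000:.2f} GFLOPS per dollar")

# On raw perf-per-dollar the X wins handily, yet the S outsold it,
# which is the point: absolute price matters more than value ratios.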

Regards,
SB
 
DirectStorage is designed for SSD... but it is a direct to GPU solution... thus "Direct" "Storage". It's a development that comes from AMD's Solid State Graphics tech.


That's just speculation on what they think MS are doing. What MS are doing might be related to that with improvements for the console space...or it might not be. We don't have enough details to make any sort of call on it at the moment.

Regards,
SB
 
That sort of data is completely beyond anything outside of the engineering laboratories though. Sure, they could list power details and test cases and frequencies and whatnot that came up in the research, but expecting them to share that info...?? I feel "the dog's got the bone" is kinda happening here - people who'd like to know specifics are starting to stretch their expectations beyond what's sane instead of just accepting that the talk given was as much as made sense factoring in an unknown future. In the past, we had clock speeds and bandwidths and everyone was happy. We never had any specifics on efficiencies.

Like, look at the SSD discussion and whether PC SSDs will be fast enough for next-gen games. We get specs like "7 GB/s" being talked about. No-one is asking the manufacturers "how often do we hit those peak speeds? What are the lowest transfer speeds?" There's no 'Spanish Inquisition' on details with anything except Sony's clockspeeds. What about MS's raytracing figures? "25 Teraflops of raytracing performance." How often? What are the bottlenecks? What are the typical attained performance rates?

This interrogation of Sony's clock speeds is reaching beyond sensible discussion into the ridiculous. It's a discussion point that's running away with itself; I think everyone needs to calm down and recalibrate their expectations! We're not going to get Sony or AMD's research data.
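The "how often do we hit those peak speeds?" question boils down to a duty-cycle-weighted average; a toy sketch with invented figures:

```python
# How far the effective transfer rate can diverge from the headline
# peak, depending on how often the peak is actually reached. The duty
# cycle and low-end rate below are invented for illustration.

def effective_gbs(peak_gbs, low_gbs, fraction_at_peak):
    """Time-weighted average throughput in GB/s."""
    return peak_gbs * fraction_at_peak + low_gbs * (1.0 - fraction_at_peak)

headline = 7.0  # advertised sequential peak, GB/s
# Suppose peak sequential reads only 30% of the time, small random
# reads at 1.5 GB/s the rest:
print(f"effective: {effective_gbs(headline, 1.5, 0.30):.2f} GB/s")
```

The headline number is real, but without the duty cycle it says little about sustained behavior, which is exactly the double standard being pointed out.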
Shift, it's not an interrogation or attack, and I'm not expecting anyone here to have the answer. Please accept my apologies if I came across that way, or created any need to defend Sony or respond at all. At least not on my account. I just happen to believe that asking about a component's base clock, when it's certainly known by the manufacturer, is a fair and legitimate question. I know I wouldn't feel good about buying a GPU with an advertised clock of 2GHz if, under load, it was designed to throttle down to 1GHz without any disclosure of that information. Not saying that's the case here, just providing an example for the point of view. No one needs to agree with it.

And I'm not asking for some esoteric research lab data. Every PS5 is going to ship with the exact same clock setting presets to use at various activity monitor levels. It's a simple curve with a top-end frequency which can be maintained all the way up to a certain activity level, through a range, down to a bottom-end frequency at the max possible activity level. It's predefined, known, and baked into every PS5. And I know we aren't going to get that from Sony unfortunately, but I'm guessing it will get leaked at some point.

The argument I'm hearing is that, if they released that information, there would be no context to align actual game loads to that activity level curve. That perhaps worst-case game loads would never reach those levels, and so the fact that those settings exist at all is meaningless. And I think that's the thrust of the point Cerny is trying to make when he says he believes the system will run near those max clocks most of the time, which I agree with if you were to profile a broad range of titles. But we know something like God of War, for instance, can push a PS4 Pro to something like 170+ watts, which is right at the limits of system TDP. So while an outlier, getting close is definitely possible. And as such, it's worth understanding how the system is configured to run in such conditions. Just my opinion of course. I won't say anything more on it. Hoping you won't judge me too harshly in the future for having that perspective.
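The kind of predefined curve being described could look something like this; the shape, thresholds, and the 2000 MHz floor are pure assumptions for illustration, with only the 2230 MHz ceiling being a published figure:

```python
# Sketch of a predefined clock-preset curve: full frequency up to some
# activity threshold, then a linear ramp down to a floor at maximum
# activity. The floor and ramp points below are assumed, not Sony's.

def gpu_clock_mhz(activity, f_max=2230.0, f_min=2000.0,
                  ramp_start=0.80, ramp_end=1.00):
    """Map a normalized activity level (0..1) to a clock frequency."""
    if activity <= ramp_start:
        return f_max          # full clocks below the threshold
    if activity >= ramp_end:
        return f_min          # floor at worst-case activity
    t = (activity - ramp_start) / (ramp_end - ramp_start)
    return f_max - t * (f_max - f_min)

for a in (0.5, 0.85, 1.0):
    print(f"activity {a:.2f} -> {gpu_clock_mhz(a):.0f} MHz")
```

A handful of numbers like these (ceiling, floor, ramp points) is what the post is saying could in principle be disclosed without any research-lab data.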

Thanks for all your contributions here, they are most appreciated. Respectfully.
 
If that were the case then PS4-P and XBO-X would garner the majority of sales over their base counterparts, but that isn't the case. The XBO-X is over 300% more performant than the XBO-S and generally about 100 USD more, yet the XBO-S sells significantly more units.

Price is important to the majority of console buyers. It's also one of the reasons why the majority of console buyers are unlikely to ever buy a new gaming PC.

Regards,
SB

I think they're confusing a unique issue in the PC space and projecting it onto console. My understanding with those Radeon Pro SSG GPUs is that they have an SSD on the graphics card so that the GPU can read from the SSD directly into VRAM, bypassing main RAM and the copy over the PCIe bus. Consoles don't need that, because there's one pool of memory. DirectStorage looks to be about latency and CPU overhead, not reading directly into GPU cache, which would be a bad idea anyway.
 
I think they're confusing a unique issue in the PC space and projecting it onto console. My understanding with those Radeon Pro SSG GPUs is that they have an SSD on the graphics card so that the GPU can read from the SSD directly into VRAM, bypassing main RAM and the copy over the PCIe bus. Consoles don't need that, because there's one pool of memory. DirectStorage looks to be about latency and CPU overhead, not reading directly into GPU cache, which would be a bad idea anyway.

I guess we will have to see. The implementation of the AMD SSG looks to be exactly the same as MS's stated implementation of mapping 100GB of the SSD directly to the GPU.
 
That's just speculation on what they think MS are doing. What MS are doing might be related to that with improvements for the console space...or it might not be. We don't have enough details to make any sort of call on it at the moment.

Regards,
SB
I'm confused. MS said their implementation has this feature. Regardless of its origin, the 100GB to GPU feature is central to their XVA architecture.
 
That sort of data is completely beyond anything outside of the engineering laboratories though. Sure, they could list power details and test cases and frequencies and whatnot that came up in the research, but expecting them to share that info...?? I feel "the dog's got the bone" is kinda happening here - people who'd like to know specifics are starting to stretch their expectations beyond what's sane instead of just accepting that the talk given was as much as made sense factoring in an unknown future. In the past, we had clock speeds and bandwidths and everyone was happy. We never had any specifics on efficiencies.

Like, look at the SSD discussion and whether PC SSDs will be fast enough for next-gen games. We get specs like "7 GB/s" being talked about. No-one is asking the manufacturers "how often do we hit those peak speeds? What are the lowest transfer speeds?" There's no 'Spanish Inquisition' on details with anything except Sony's clockspeeds. What about MS's raytracing figures? "25 Teraflops of raytracing performance." How often? What are the bottlenecks? What are the typical attained performance rates?

This interrogation of Sony's clock speeds is reaching beyond sensible discussion into the ridiculous. It's a discussion point that's running away with itself; I think everyone needs to calm down and recalibrate their expectations! We're not going to get Sony or AMD's research data.

The variable clock speeds are concerning because of how often they're bullshit hypotheticals on PC. This being PC parts from the same company, you can understand the skepticism. To me, most of Cerny's explanations have the same tone as this guy:


Sounds great but not reality.
 
I'm confused. MS said their implementation has this feature. Regardless of its origin, the 100GB to GPU feature is central to their XVA architecture.

I think you are correct. Reading up more on Radeon SSG, it sounds like the GPU can read straight from the SSD into cache. It doesn't need to copy to VRAM first. I'm curious how much lower the latency is than a typical request that would go through the CPU.
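For a feel of why the direct path could matter, here's a crude latency model of the two read paths being compared; every number in it is an assumption for illustration, not a measurement of SSG or any real system:

```python
# Rough model: a read that bounces through system RAM under CPU control
# versus a direct SSD-to-VRAM transfer. All bandwidths and overheads
# below are invented round numbers.

def transfer_ms(size_mb, bandwidth_gbs):
    return size_mb / 1024.0 / bandwidth_gbs * 1000.0

def bounce_path_ms(size_mb, ssd_gbs=3.5, pcie_gbs=16.0, cpu_overhead_ms=0.5):
    # SSD -> system RAM, then a second copy over PCIe into VRAM,
    # plus CPU time to service the request and schedule the copy.
    return cpu_overhead_ms + transfer_ms(size_mb, ssd_gbs) + transfer_ms(size_mb, pcie_gbs)

def direct_path_ms(size_mb, ssd_gbs=3.5, setup_ms=0.05):
    # One DMA transfer straight toward the GPU; minimal CPU involvement.
    return setup_ms + transfer_ms(size_mb, ssd_gbs)

for mb in (1, 64):
    print(f"{mb} MB: bounce {bounce_path_ms(mb):.2f} ms, direct {direct_path_ms(mb):.2f} ms")
```

Under these assumptions the fixed CPU overhead dominates for small reads, which is where the direct path would help most; for big sequential reads the SSD bandwidth dominates either way.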
 
The variable clock speeds are concerning because of how often they're bullshit hypotheticals on PC. This being PC parts from the same company, you can understand the skepticism. To me, most of Cerny's explanations have the same tone as this guy:


Sounds great but not reality.

PBO has nothing to do with this and is not even used in consoles. A 3.5GHz clock speed is also far from a BS hypothetical on PC; it's very much a reality.
 
PBO has nothing to do with this and is not even used in consoles. A 3.5GHz clock speed is also far from a BS hypothetical on PC; it's very much a reality.

The concept of variable clock speeds using a power budget absolutely applies. Basically, it allows a lot of marketing BS to creep in, as we have seen repeatedly, and that video is a shining beacon of it.

Oh, and I expect Intel to do the same going forward with their TVB3 trash.
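A toy model of why a shared power budget makes headline "max clocks" slippery: dynamic power grows faster than linearly with frequency (since voltage must rise with frequency too), so the sustainable clock depends entirely on the budget you assume. All constants here are made up for illustration:

```python
# Dynamic power scales roughly as C * V^2 * f, and voltage has to rise
# with frequency, so power grows super-linearly with clock speed.
# Every constant below is invented; this is a shape argument, not data.

def power_w(freq_ghz, c=30.0, v0=0.8, k=0.25):
    volts = v0 + k * freq_ghz          # crude linear V-f relationship
    return c * volts ** 2 * freq_ghz

def max_freq_under_cap(cap_w, lo=0.5, hi=4.0):
    """Highest frequency whose power fits the cap (power is monotonic
    in frequency here, so simple bisection works)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_w(mid) > cap_w:
            hi = mid
        else:
            lo = mid
    return lo

print(f"{max_freq_under_cap(180.0):.2f} GHz fits a 180 W budget")
```

The marketing problem falls out of the curve's shape: quoting the top of it says nothing about what fits under a real thermal/power budget, which is what the boost-clock skepticism on PC is about.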
 
One thing to be careful about when looking at Steam hardware survey stats, as I've said many times in the Steam Hardware Survey thread: China has a disproportionate share of the survey (at least 1/3 of Steam users are in China), such that the numbers aren't reflective of the hardware install base in Western countries. And a large majority of machines in China are lower spec, built from used hardware recycled from Western countries. Chinese PC companies are the #1 customer for recycled computer hardware in the West. Basically, when a PC is recycled, either its parts are purchased in bulk by Chinese companies that resell computers to Chinese consumers, or they get scrapped.

Regards,
SB

Assuming Steam is a good market representation, if 1/3 of Steam customers are in China, ignoring them would also mean disregarding 1/3 of the potential PC market. Would that be a good thing, especially if we consider that a big part of the remaining market will also be underpowered?

Just questioning... Not trying to make claims!
 
You can broadly estimate a potential hardware base for a given GPU spec. Pick your GPU, find the proportion of users with that GPU or better (bit of faff) and multiply that by active accounts (about 100 million). That's quite convenient at the moment: roughly x% of hardware will be x million users. So 6 million 8-core CPUs. 48 million 4-core CPUs. Well over 40 million GPUs of 1050 spec.

I guess the overview of the top-end performance is that current consoles represent a significantly larger install base of 'performant' hardware than PC, with PC having a more powerful top end that's only a few million strong.
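The estimation method described above, spelled out as code; the survey percentages used here are placeholders matching the post's examples, not current Steam figures:

```python
# Share of surveyed machines at or above a spec, times active accounts.
# The 100 million active-accounts figure comes from the post; the
# survey shares below are the post's example numbers.

ACTIVE_ACCOUNTS = 100_000_000

def install_base(survey_share):
    """survey_share: fraction (0..1) of surveyed machines meeting the spec."""
    return round(survey_share * ACTIVE_ACCOUNTS)

print(install_base(0.06))   # ~8-core CPUs at 6% of the survey
print(install_base(0.48))   # ~4-core CPUs at 48%
```

With ~100 million active accounts the percentage-to-millions conversion is one-to-one, which is the convenience the post points out.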
 
You can broadly estimate a potential hardware base for a given GPU spec. Pick your GPU, find the proportion of users with that GPU or better (bit of faff) and multiply that by active accounts (about 100 million). That's quite convenient at the moment: roughly x% of hardware will be x million users. So 6 million 8-core CPUs. 48 million 4-core CPUs. Well over 40 million GPUs of 1050 spec.
What is difficult to factor in is how many GPUs have been bought not for gaming but for the compute capability. CSS eats compute for breakfast, so designers and editors have machines with gaming GPUs, as do bitcoin miners, as do server farms. I think there remains a degree of wetting your finger and sticking it in the air.
 
current consoles represent a significantly larger install base of 'performant' hardware than PC

Kinda shocking that so many would be on lower than HD 7850-class GPUs and tablet CPUs. Basically 2006 to 2012 mid-range hardware. I'll go with the poster that once told me Steam surveys can't be trusted.
 
What is difficult to factor in is how many GPUs have been bought not for gaming but for the compute capability. CSS eats compute for breakfast, so designers and editors have machines with gaming GPUs, as do bitcoin miners, as do server farms. I think there remains a degree of wetting your finger and sticking it in the air.
How many of those are running Steam on their workstations though? If they're running Steam, one kinda assumes they are playing games on those PCs too.
 