That's a tiny, tiny write stream.
Ok. Sorry. But I would still do a perf-counter check.
Also, reading from flash needs about 50% of the voltage that writing does, AFAIR.
And erase needs even more voltage, and takes longer.
All of the current PC SSD drives are...
If the game pages are not in RAM already? Kinda doubt that.
It's much faster than Optane. On paper.
Controller overhead. File system overhead. Etc.
AFAIK DirectX Storage minimal read/write block...
No. Going exclusive.
And I'm not saying it's harder, just that it's not similar.
If you have two very similar platforms, the most viable business solution...
I'm saying that in current gen it's impossible to optimize equally well for all platforms.
Otherwise we would see less whining about Jaguar perf...
That's not what I'm saying.
Why not? MSFT was pretty bold about their BC claim.
Optimization is a loser's game. Main platform matters a lot. Nothing else.
It's not PS2/XBOX...
Any game that runs without changes on XBSX.
The usual case of not being a straight PC port? Seems so. :)
No, I'm talking about "fixed" vs "variable" clock design.
But you know, if somebody talks about "full BC" you immediately know that the API is fat... :)
It's just common sense.
And PS4 Pro vs XB1X perf in...
Or because fatter API prevents you from ever getting that speed anyway...
They can; the question is whether it will have better performance.
So far it seems it will not.
But that's what they wanted to solve.
Yep, that's why Xeons drop to 50% of the freq when AVXing.
While the desktop CPUs don't.
It is getting ridiculous.
That's all statistics, you know. When a fair coin flips heads 100 times in a row, it's still technically possible.
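For scale (my arithmetic, not a figure from the thread): the odds come out to (1/2)^100, roughly 7.9e-31. A one-liner to sanity-check it:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Probability of a fair coin landing heads 100 times in a row:
    // (1/2)^100, about 7.9e-31. "Possible" only in the strictest sense.
    std::printf("P = %g\n", std::pow(0.5, 100.0));
}
```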
I suspect you're telling me that for...
That's a hardware interface.
But the main part is the software interface between controllers.
For example if all certified drives will need to be...
Nope. That's just a "happy path" GPU.
Simple platformers had similar issues AFAIR.
From the horse's mouth:
"Developers don't need to optimise in...
Why? Does AVX suddenly run at full speed on AMD CPUs all the time?
Do GPUs never throttle because of heat?
It's getting ridiculous.
Most of the problems in PS4 Pro "high heat" scenarios came from simpler, bandwidth-oriented code. Like the mentioned "HZD map screen".
Yep. But we don't really know what the interface between the PS5 I/O controller and a certified SSD controller will be.
And surely it will not...
It will be streaming into RAM. The hw decompressor works in RAM too, at least that's what the patent says.
I would be pretty impressed if any...
Say, for example, the lowest clock is 800MHz (which is probably the case). Then Cerny should have said that. Am I right?
Some people are delusional. 98-100% VALU usage in a game...
So much fun, indeed. All that async compute last gen was probably using the other 50%...
Dunno. Is there a PC-style block storage controller in PS5?
Unless these too have a custom FS with the block index fully in SRAM.
Which I doubt.
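For what it's worth, a toy sketch of why a block index fully in SRAM matters. Everything here (names, layout) is hypothetical: the point is that the lookup becomes a single array read instead of extra flash round-trips to fetch mapping pages.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical FTL-style mapping table, resident entirely in controller SRAM.
// With the whole index in SRAM, logical-block -> NAND translation is one
// array read; a paged index would first need extra reads from flash itself.
struct BlockIndex {
    std::vector<uint32_t> logical_to_nand; // logical block -> NAND page address

    uint32_t lookup(uint32_t logical_block) const {
        return logical_to_nand[logical_block]; // O(1), no flash round-trip
    }
};
```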
I don't see any 7GB/sec there.
It's 639.9MB/sec on the screenshot.
SFS/DXS pages are 64K though, so it's not clear what the real speed will be.
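Quick back-of-the-envelope on what 64K pages imply (my arithmetic, assuming 64 KiB pages and taking both bandwidth figures at face value):

```cpp
#include <cstdio>

int main() {
    // Page reads per second needed to sustain a given bandwidth,
    // assuming 64 KiB SFS/DirectStorage pages.
    const double page_bytes = 64.0 * 1024.0;
    const double measured   = 639.9e6; // the screenshot figure, MB/sec
    const double headline   = 7.0e9;   // the quoted 7GB/sec

    std::printf("measured: ~%.0f page reads/sec\n", measured / page_bytes);
    std::printf("headline: ~%.0f page reads/sec\n", headline / page_bytes);
}
```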
Probably. I was not aware that AMD groups SAs into SEs, given that nothing seems to be shared at the SE level.
4 SAs; nothing changed since the 5700XT, it seems.
I.e. 64 ROPs, 4 rasterizers, 4 primitive units, 4 L1 pieces (probably the same size), 1 Geometry...
4 × 16 = 64 (4 SAs, 16 ROPs each).
Nope. 4 SAs overall for 5700XT.
"The new graphics L1 cache serves most requests in each shader array, simplifying the design of the L2 cache and boosting the available...
Less L1, fewer ROPs, and less GE per CU.
Dunno. PC has a different set of trade-offs. And I don't think streaming RAM->VRAM on PC has zero impact on the GPU-accessible bandwidth.
Map screen in HZD is also doing a lot of work per cycle. Is it really needed?
Or even: should we optimize our TDP and cooling for that particular...
But that's not how a CPU should work in a game, though...
In the end, what is rendered is the final result, therefore the CPU needs to work on the same...
They may, but will they target XBSX as a baseline?
Pretty close to RAM random read on PC.
Where you cannot allocate-place things in RAM anyway....
But it still means no "2080 as a baseline" until at least 2021 for XBSX.
1. It's the same story. They wanted to change the PS4 story.
2. God of War is not a max load. Unless you have any real utilization numbers, I would...
I'm not sure yet. This year it's all about Xone baseline for MSFT.
Dunno. What I've seen is much more of the other side: SSDs in PCs are useless for gaming, therefore PS5 is made by fools.
It can. There is no free lunch. Which means most of the "latency-sensitive" work the CPU does should be bounded by what's in its cache.
A CPU that spams...
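To illustrate the cache point with a toy contrast (nobody's engine code, just the shape of the problem): the sequential walk stays prefetcher-friendly and cache-resident, while the dependent random chase stalls on memory at every step.

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Cache-friendly: linear scan, hardware prefetcher hides most of the latency.
uint64_t sequential_sum(const std::vector<uint32_t>& data) {
    return std::accumulate(data.begin(), data.end(), uint64_t{0});
}

// Latency-bound: each load depends on the previous one, so every cache miss
// is paid in full. This is the "CPU that spams memory requests" case.
uint32_t pointer_chase(const std::vector<uint32_t>& next, uint32_t start, int steps) {
    uint32_t i = start;
    for (int s = 0; s < steps; ++s)
        i = next[i];
    return i;
}
```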
Posted on March 30th. A genuine fool.
I'm not sure we need to explain it for the 100th time. It was perfectly fine the first 10 times, but now it's getting ridiculous.
They had problems...
You will get a lower mip. Like mip 2 will be 4K.
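The mip arithmetic, for reference (assuming a 16K base texture, which is what makes mip 2 come out at 4K; each level halves both dimensions):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Mip n of a W-wide texture is W >> n (clamped to 1).
    // 16K base -> mip 1 is 8K, mip 2 is 4K.
    unsigned base = 16384;
    for (unsigned level = 0; level <= 2; ++level)
        std::printf("mip %u: %u\n", level, std::max(1u, base >> level));
}
```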
Cache. You still use it.
Unless you wanna do RT. Then your cache is busted by random access all...
That's all cool and stuff.
But do you remember that for oblique angles mip 0 is used on a pretty small portion of the screen?
All the farther...
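That follows from how mip selection works: the level is driven by the screen-space footprint of a texel, and oblique angles stretch one of the derivatives, so the LOD climbs fast away from the camera. A sketch of the textbook formula (not any specific GPU's exact implementation):

```cpp
#include <algorithm>
#include <cmath>

// Textbook LOD: lambda = log2 of the larger screen-space texel footprint,
// computed from the derivatives of the texel coordinates per pixel.
// Mip 0 survives only where the footprint stays at or under one texel,
// which at oblique angles is a small slice of the surface.
float mip_level(float dudx, float dvdx, float dudy, float dvdy) {
    float fx = std::sqrt(dudx * dudx + dvdx * dvdx);
    float fy = std::sqrt(dudy * dudy + dvdy * dvdy);
    return std::max(0.0f, std::log2(std::max(fx, fy)));
}
```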