Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
Did anyone else notice how Digital Foundry made a very obvious point of noting that the clocks for the XBSX were locked? This makes me wonder if MS thinks Sony will use boost clocks. If the rumor of sub-10 TF is true, this would be a pretty genius move by Sony to help bolster their specs, at least on paper.
 
Did anyone else notice how Digital Foundry made a very obvious point of noting that the clocks for the XBSX were locked? This makes me wonder if MS thinks Sony will use boost clocks. If the rumor of sub-10 TF is true, this would be a pretty genius move by Sony to help bolster their specs, at least on paper.

Boost clocks on consoles won't work well? Consistent performance across the board might be a problem?
Also, I doubt they care about the XSX being more performant in the teraflop department; they probably had other priorities, like a smaller chip, a more console-style design, lower cost ($399 worked well for them), and a single-SKU plan from the beginning.
 
Boost clocks on consoles won't work well? Consistent performance across the board might be a problem?
Sure, it can help, but it's not sustained performance. If Sony is pushing their GPU over 2 GHz in boost mode, it will raise their TF number even if that clock can only be sustained in short bursts and isn't really representative of what the GPU can do. It would be a smart way for them to have better specs on paper, and it blurs the line slightly when comparing their console to the XBSX. Basically, in this scenario Sony could reveal only their boost-clock TF number and not mention what their actual sustained clock gets them.
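For anyone who wants to see how much a boost clock moves the headline number, here's a quick sketch using the standard peak-FP32 formula for AMD-style GPUs (CUs × 64 shaders × 2 ops per clock × clock speed). The 36-CU part and the 1.8 GHz / 2.1 GHz clocks below are illustrative assumptions, not confirmed PS5 specs; the 52 CU / 1825 MHz figures are the announced XBSX numbers.

```python
def teraflops(cus: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS for an AMD-style GPU with 64 shaders per CU,
    2 FP32 ops per shader per clock (FMA)."""
    return cus * 64 * 2 * clock_mhz / 1_000_000

# Hypothetical 36-CU part: sustained clock vs. a short-burst boost clock.
sustained = teraflops(36, 1800)  # ~8.29 TF
boosted = teraflops(36, 2100)    # ~9.68 TF

# For reference, the announced XBSX: 52 CUs at a locked 1825 MHz.
xbsx = teraflops(52, 1825)       # ~12.15 TF

print(f"sustained {sustained:.2f} TF, boost {boosted:.2f} TF, XBSX {xbsx:.2f} TF")
```

The point of the example: the same silicon gains well over a teraflop on paper from a boost clock it may only hold for moments.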
 
12 TF was already leaked by the usual MS friends more than a year ago. But who guessed slightly above 12 TF and 2 GB/s? No one except Kleegamefan.

Depending on how things turn out for PS5, there might be many who guessed correctly if they ever said both consoles were close. Depending on actual definitions of close and actual PS5 numbers. We'll just have to wait (and wait (and wait)) to find out. ;)
 
12 TF was already leaked by the usual MS friends more than a year ago. But who guessed slightly above 12 TF and 2 GB/s? No one except Kleegamefan.

Nope, still no hard numbers (just "above 12"), which was impossible anyway; they probably finalised clocks a month ago, max. Also, 2 GB/s is wrong; it is supposed to be 2.4 GB/s raw and 6 GB/s with the extra decompression block.

Sure, it can help, but it's not sustained performance. If Sony is pushing their GPU over 2 GHz in boost mode, it will raise their TF number even if that clock can only be sustained in short bursts and isn't really representative of what the GPU can do. It would be a smart way for them to have better specs on paper, and it blurs the line slightly when comparing their console to the XBSX.

I guess that is what Sony will do; the XSX already clocks close to that on 52 CUs (4 disabled).
 
Also, 2 GB/s is wrong; it is supposed to be 2.4 GB/s raw and 6 GB/s with the extra decompression block.

I wouldn't use those numbers. I'd say 2.4 GB/s raw, 4.8 GB/s compressed, and decompression at 6 GB/s.

EDIT:
What's not clear to me is whether the decompression block is, or can be, used directly by developers, or if the 1.2 GB/s headroom (6 − 4.8) is only there in case they ever increase the SSD speed.
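The raw/compressed/decompressor split above is easy to mix up, so here's the arithmetic spelled out. The 2.4 GB/s raw and 6 GB/s decompressor figures are from the quoted PR; the ~2:1 average compression ratio is an assumption used to get the 4.8 GB/s "compressed" figure.

```python
RAW_GBPS = 2.4           # bytes actually read off the SSD per second
COMPRESSION_RATIO = 2.0  # assumed average ratio of the stored data
DECOMP_LIMIT_GBPS = 6.0  # rated output of the decompression hardware

# Effective delivery rate is the decompressed stream, capped by the
# decompressor's rated throughput.
effective = min(RAW_GBPS * COMPRESSION_RATIO, DECOMP_LIMIT_GBPS)  # 4.8
headroom = DECOMP_LIMIT_GBPS - effective                          # 1.2

print(f"effective {effective:.1f} GB/s, decompressor headroom {headroom:.1f} GB/s")
```

So on these assumptions the decompressor isn't the bottleneck; the 1.2 GB/s gap is exactly the headroom question raised above.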
 
I wouldn't use those numbers. I'd say 2.4 GB/s raw, 4.8 GB/s compressed, and decompression at 6 GB/s.

Ok, yes; the point is that he wasn't right if he said 2 GB/s. Had he said exactly 2.4 GB/s raw, and 12.155 (was it x.155?), it would give him more credit.
Thing is, God (klee) isn't off the cards yet, but neither is Gospel (the GitHub leak). We will have to see who guessed right. I want to point out that it doesn't really matter if someone had it right or wrong; it's all speculation based on different rumors, leaks, and methods. It's just fun though :)
 
I wouldn't use those numbers. I'd say 2.4 GB/s raw, 4.8 GB/s compressed, and decompression at 6 GB/s.

EDIT:
What's not clear to me is whether the decompression block is, or can be, used directly by developers, or if the 1.2 GB/s headroom (6 − 4.8) is only there in case they ever increase the SSD speed.
SSD speed (I/O) is 2 GB/s rounded, as the insider correctly predicted. So now, as of a few hours ago, we must all use compressed (marketing) numbers even though we have been using actual I/O bandwidth numbers (the real amount of bytes that pass through the controller) for years? :rolleyes:

Are we going to use the 25 TF number announced by Microsoft (and happily printed by DF) too?
 
Ok, I see you were starting from the 14 Gbps module now. I was trying to ask how you'd reconcile the 528 GB/s and 560 GB/s numbers.

Several people already thought there might be 10 chips based on the Scarlett video at E3, including @Proelite.
Interesting: the XSX RAM spec is 560 GB/s. The real-world measured value won't be higher.

OBR B0 already has a 528 GB/s measured value.
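To reconcile the two numbers: 560 GB/s is the theoretical peak you get from 10 GDDR6 chips (a 320-bit bus) at 14 Gbps per pin, while a measured figure like 528 GB/s would naturally sit a bit below peak. A quick sketch of that arithmetic, with the chip count and pin rate taken from the discussion above:

```python
def gddr6_peak_gbps(chips: int, gbps_per_pin: float, bits_per_chip: int = 32) -> float:
    """Theoretical peak bandwidth (GB/s) for a GDDR6 setup:
    total bus width in bits * per-pin data rate, converted bits -> bytes."""
    bus_width_bits = chips * bits_per_chip
    return bus_width_bits * gbps_per_pin / 8

peak = gddr6_peak_gbps(10, 14.0)  # 10 chips -> 320-bit bus at 14 Gbps
measured = 528.0                  # the quoted measured value

print(f"theoretical peak {peak:.0f} GB/s, measured {measured:.0f} GB/s "
      f"({measured / peak:.0%} of peak)")
```

A measured value around 94% of theoretical peak is plausible; real workloads never hit the full paper number.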


It seems that 11.6 TF in the PS5 (from Odium) is reasonable.
 
Lol, really? After all the shit you kept kicking up against anything other than Github!?

I still go by data, not insiders. It doesn't matter if you're wrong, or if I am; that's the point. Just as you are kicking against everything except an insider :)

So it seems that 11.6 TF (from Odium) is reasonable.

With such high clocks possible, the verified 15 TF leak makes much more sense; above 2 GHz with 60 CUs is more realistic.
 
Don't fret. Sony will push their compressed numbers too, which look to be well above 7 GB/s, as @chris1515 has been saying. So something loading in 8 seconds on Series X may take only 5 seconds on PS5. *That depends on actual Sony numbers and how fast their decompressor is.
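A back-of-envelope check on that "8 seconds vs. ~5 seconds" claim. The 4.8 GB/s Series X effective rate comes from the PR figures discussed earlier in the thread; the 7 GB/s PS5 rate is the rumour being floated here, not a confirmed spec.

```python
XSX_EFFECTIVE = 4.8  # GB/s, the compressed Series X figure from the PR
PS5_EFFECTIVE = 7.0  # GB/s, the rumoured PS5 figure (speculation)

xsx_seconds = 8.0                      # the load time used in the example
data_gb = xsx_seconds * XSX_EFFECTIVE  # ~38.4 GB delivered in that time
ps5_seconds = data_gb / PS5_EFFECTIVE  # same data at the rumoured rate

print(f"{data_gb:.1f} GB load: Series X {xsx_seconds:.0f} s, PS5 ~{ps5_seconds:.1f} s")
```

The same payload comes out around 5.5 seconds on the rumoured PS5 rate, so the rough "8 vs. 5" comparison holds only if the PS5 figure really lands above 7 GB/s.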
I doubt it. Compression was already used in PS4 games (off the Blu-ray) via a zlib decompression block (a real dedicated block, then), and they never stated the compressed numbers in their PR or anywhere else. To reach those conflated numbers, Microsoft most probably uses GPU decompression, as hinted by one MS insider on Era, so it shouldn't be realistically usable for streaming, only loading. Just read their PR closely:

offloads decompression work from the CPU [no mention of GPU]...The decompression hardware [vague enough] supports Zlib for general data

Anyways:
SSD -> using conflated compressed numbers, from 2.4 to 7.
GPU -> using an incredible 25 TF (RT-equivalent) number, the new FP16 metric applied to RT.

Seems like they needed to seriously conflate their numbers in those two metrics (not for the CPU!). I wonder why? Because those numbers should already sound awesome, amirite? ;)
 