Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

On the other hand, AMD hasn't announced that the PS5 uses its hardware RT. It's a sign that Sony may be using another solution, such as PowerVR.

How realistic is that? Such an "other" 3rd-party IP solution would either need to share memory bandwidth with the CPU/GPU or would require its own large, high-speed cache on chip. As I don't know how the GPU cache structure actually works, both of these approaches might require a lot of cache-flush/coherency traffic.

I would only expect an AMD-internal solution to be fully integrated into the GPU's own cache infrastructure.
 
Because people who work at development studios are prone to using 4chan? Or is that from 8chan?
OsirisBlack over at NeoGAF seemed to suggest this is a controlled leak.

OsirisBlack said:
He won't get fired, a lot of shit is being dropped purposely.

Had to check with my other source because it's kind of easy to pinpoint who he is.

He also seemed to suggest he knows who the guy is that released the 13.4 TF specs (CPU: 8-core/16-thread Zen 2 @ 3.2 GHz; GPU: 56 RDNA CUs @ 1870 MHz (13.4 TF); RAM: 18 GB GDDR6 (3 GB for OS); SSD: 1 TB NVMe) in the same 4chan post chain.

OsirisBlack said:
The specs guy is a system and game engine designer who works for a company. He used to regularly post here and was run off.
I wouldn't be surprised if this was one of his sources. Obviously he's been wrong before, so take it with a grain of salt.
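For reference, the 13.4 TF figure in those quoted specs falls straight out of the usual peak-FLOPS formula (CUs × 64 lanes × 2 ops per FMA × clock). A quick sketch using the rumoured numbers only; nothing here is a confirmed spec:

```python
# Peak-FP32 sanity check for the rumoured specs above.
# Formula: CUs * 64 SIMD lanes * 2 ops per clock (FMA) * clock (GHz) -> GFLOPS.

def peak_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Theoretical FP32 throughput in TFLOPS for a GCN/RDNA-style GPU."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000

print(round(peak_tflops(56, 1.870), 2))  # 13.4 -> matches the 4chan figure
print(round(peak_tflops(36, 2.000), 2))  # 9.22 -> the GitHub-leak figure discussed later
```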
 
Are we going to expect a $599 PS5 then?
The reason some people think the XSX is more powerful than the PS5 isn't just the GitHub leak; it's more that they expect the XSX to be more expensive.
But sure, there are some people who really want the PS5 to be more powerful than the XSX no matter the price, or who think the XSX will include some Kinect-like thing again (barbecue feature, anyone?) so the PS5 can be the same price or cheaper and still more powerful.
 
How realistic is that? Such an "other" 3rd-party IP solution would either need to share memory bandwidth with the CPU/GPU or would require its own large, high-speed cache on chip. As I don't know how the GPU cache structure actually works, both of these approaches might require a lot of cache-flush/coherency traffic.

I would only expect an AMD-internal solution to be fully integrated into the GPU's own cache infrastructure.
It's reasonable enough to explore. Cerny said they have hardware-accelerated ray tracing. The GitHub leak did not indicate they have it.
That leaves three obvious possibilities:
a) the test was non-comprehensive (thus it's AMD and likely the same as MS)
b) it's a different vendor's solution
c) they don't have it.

A & B are the most probable. But we have very weak data in option (b); linkedin resumes and job postings. This leaves (a) as the most probable since it should be the easiest to implement and if AMD can provide it for MS, it should be provision-able for Sony.

B goes against Occam's razor without more data points. It seems like wishful thinking on the part of people hoping for some one-upmanship, but it doesn't seem reasonable.
 
To be specific, it was 18GB DDR6. :LOL:
Mixing up DDR and GDDR is a classic. I also have problems with 18 GB, as that implies a 288- or 384-bit bus, which implies uncommon or non-existent GDDR6 clock speeds, as well as a so-far non-existent GDDR6 density.
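To spell out the capacity problem: with the GDDR6 densities actually shipping at the time (8 Gbit / 1 GB and 16 Gbit / 2 GB, 32-bit interface per chip), 18 GB only falls out of a couple of chip counts, and each implies a particular bus width. A rough sketch of that arithmetic; the chip densities and the non-clamshell assumption are mine, not from the rumour:

```python
# Which uniform-density GDDR6 layouts reach 18 GB, and what bus width they imply.
# Assumes 32 data pins per chip and a non-clamshell layout.

CHIP_BUS_BITS = 32
DENSITIES_GB = [1, 2]      # shipping densities; 1.5 GB (12 Gbit) parts didn't exist

for density in DENSITIES_GB:
    chips = 18 / density
    if chips.is_integer():
        print(f"{int(chips)} x {density} GB -> {int(chips) * CHIP_BUS_BITS}-bit bus")
# 18 x 1 GB -> 576-bit bus (implausibly wide)
#  9 x 2 GB -> 288-bit bus (the odd width mentioned above)
# A 384-bit bus only reaches 18 GB with 1.5 GB chips or mixed densities.
```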
 
You mean, like the "GitHub leak", which has reached a level of obsession that borders on jihad? Where 9.2 TFLOPs or 36 CUs or 2.0 GHz has been repeated on practically every one of the last 100 pages of this thread as if they were fact? I too don't mind exploring these possibilities, but hopefully this is all over soon.

Just imagine for a sec, what if that leak had said 15TF of power? No-one would be offended by it.
 
Just imagine for a sec, what if that leak had said 15TF of power? No-one would be offended by it.

The GitHub leak is not only by and large the best leak we've got, it's also the most comprehensive leak ever for a console across all generations.

Even the leaked Durango whitepapers in January 2013 were not this detailed.

Ranking the information leaked before launch by wealth of info:

1. GitHub leak
2. Durango SDK / dev docs
3. Orbis dev docs
4. 360 architectural diagram
5. VGLeaks Orbis spec
6. Reddit XSX/XSS specs leak, Jan 2019
7. Durango and Orbis pastebin
8. Gonzalo / Flute leaks
 
But sure, there are some people who really want the PS5 to be more powerful than the XSX no matter the price, or who think the XSX will include some Kinect-like thing again (barbecue feature, anyone?) so the PS5 can be the same price or cheaper and still more powerful.

There are? In this thread?
Funny, what I see is people repeating the GitHub leaks (i.e. The Gospel) ad nauseam as some sort of rock-solid, super-definite proof that the PS5 will turn out significantly slower than the Series X, with zero percent chance of it bringing anything other than 36 CUs at 2 GHz max, and making their presence in the forums a personal crusade to discredit every insider/source/rumor/leak that suggests otherwise.

And the cherry on top are the ones who accuse people who dare to entertain the veracity of any leak that isn't 100% defined by The Gospel as being in denial and living a wet dream.

I guess we're visiting different forums.



As for me, I'd rather have both consoles getting identical capabilities and have them compete in games + services + features + gamer QoL, as that's where I think I stand to gain the most.
Probably because I'm just a consumer and I don't get paychecks from Microsoft or Sony.


Just imagine for a sec, what if that leak had said 15TF of power? No-one would be offended by it.
Lol, the reason this thread was even created is that rumours of the PS5 being more powerful than Scarlett started appearing. That offended some people so much that they felt the need to contain that discussion away from the main thread for console predictions.

Just go check the first page in this thread and see for yourself.
 
Just imagine for a sec, what if that leak had said 15TF of power? No-one would be offended by it.
Just imagine if the XSX was 8.3TF, a 9.2TF PS5 rumor would be howled down as unrealistic fanboy drivel. And yet, 9.2TF would in reality be just as unrealistic then as it is now.
 
To be specific, it was 18GB DDR6. :LOL:
Mixing up DDR and GDDR is a classic. I also have problems with 18 GB, as that implies a 288- or 384-bit bus, which implies uncommon or non-existent GDDR6 clock speeds, as well as a so-far non-existent GDDR6 density.

Well... you could construct the RAM amount with mixed densities of GDDR6 on a 384-bit bus (12 chips, non-clamshell).

6 x 2GB = 12GB
6 x 1GB = 6 GB

Memory access/bandwidth would be weird for the larger capacity chips. I probably discussed this before for a supposed 320-bit configuration to hit 16GB.


What was the rumoured bandwidth listed? The link seems empty now.
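For what that "weird" access pattern might look like: if the lower 1 GB of every chip interleaves across the full 384-bit bus while the extra capacity lives only on the six 2 GB chips, you end up with two bandwidth regions. A rough sketch under those assumptions; the 12 Gbps pin speed and the interleaving scheme are my guesses, not anything from the leak:

```python
# Hypothetical 18 GB mixed-density layout: 6 x 2 GB + 6 x 1 GB on a 384-bit bus,
# non-clamshell, 32 bits per chip. Pin speed assumed to be 12 Gbps.

PIN_SPEED_GBPS = 12.0
BITS_PER_CHIP = 32

def region_bw(num_chips: int) -> float:
    """Peak bandwidth (GB/s) of an address region striped over num_chips chips."""
    return num_chips * BITS_PER_CHIP * PIN_SPEED_GBPS / 8

total_gb = 6 * 2 + 6 * 1                        # 18 GB
fast_gb, fast_bw = 12 * 1, region_bw(12)        # 12 GB striped over all 12 chips -> 576 GB/s
slow_gb, slow_bw = total_gb - 12, region_bw(6)  # remaining 6 GB over 6 chips -> 288 GB/s

print(total_gb, (fast_gb, fast_bw), (slow_gb, slow_bw))
# 18 (12, 576.0) (6, 288.0)
```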
 
Just imagine if the XSX was 8.3TF, a 9.2TF PS5 rumor would be howled down as unrealistic fanboy drivel. And yet, 9.2TF would in reality be just as unrealistic then as it is now.

But it is not. MS is going for a two-tiered launch, which enables them to have a more powerful box. They also basically went for a PC case, and probably a rather high price for a console. In the end it doesn't really matter, because even a 12 TF GPU will be mid-range rather soon, as AMD and NV are launching new GPUs as well, where they can go all out on CU counts, clock speeds, and finally price.
 
Well... you could construct the RAM amount with mixed densities of GDDR6 on a 384-bit bus (12 chips, non-clamshell).

6 x 2GB = 12GB
6 x 1GB = 6 GB

Memory access/bandwidth would be weird for the larger capacity chips. I probably discussed this before for a supposed 320-bit configuration to hit 16GB.


What was the rumoured bandwidth listed? The link seems empty now.

18GB DDR6, 576GB/s.

A case of trying to inflate the numbers because bigger = better.
 
Can somebody point out the past marketing/PR statements that were so misleading that we can't help but continually get into these silly arguments over semantics?
 
Can somebody point out the past marketing/PR statements that were so misleading that we can't help but continually get into these silly arguments over semantics?
I could probably say that at launch, Xbox executives were making a lot of terrible claims.
I think a lot of people tried their best to slay the X1X over being able to do 4K.
 
18GB DDR6, 576GB/s.

A case of trying to inflate the numbers because bigger = better.
I guess so.

It'd just be 12Gbps GDDR6 on 384-bit bus then. The 12Gbps bin might not be in the catalogue for the 16Gbit chips at Samsung, but that doesn't mean they can't just clock it at that speed (especially since 12Gbps is an option for the 8Gbit bins).

Anyways. 384-bit bus means a larger chip.
 
I guess so.

It'd just be 12Gbps GDDR6 on 384-bit bus then. The 12Gbps bin might not be in the catalogue for the 16Gbit chips at Samsung, but that doesn't mean they can't just clock it at that speed (especially since 12Gbps is an option for the 8Gbit bins).

Anyways. 384-bit bus means a larger chip.

Also 12 Gbit density chips, which also aren't available at Samsung.

You can get 18 GB and 576 GB/s on a 320-bit or wider bus using mixed densities.
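One way to sanity-check which bus widths can plausibly carry 576 GB/s is to back out the per-pin data rate each width would need; whether those speed bins actually exist in anyone's catalogue is a separate question. A quick sketch:

```python
# Per-pin GDDR6 data rate required to hit the rumoured 576 GB/s at various bus widths.
# bandwidth (GB/s) = bus_bits * pin_speed (Gbps) / 8  ->  pin_speed = bandwidth * 8 / bus_bits

TARGET_GBS = 576

for bus_bits in (288, 320, 352, 384):
    pin_gbps = TARGET_GBS * 8 / bus_bits
    print(f"{bus_bits}-bit bus -> {pin_gbps:.1f} Gbps per pin")
# 288-bit -> 16.0 Gbps, 320-bit -> 14.4 Gbps, 352-bit -> 13.1 Gbps, 384-bit -> 12.0 Gbps
```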
 
Also 12 Gbit density chips, which also aren't available at Samsung.
That's where the mixed density configuration comes into play (as I tried to convey above) with a 384-bit bus.

edit:

That said, the I/O would dictate a minimum die perimeter, and I'm not sure we know that the GDDR6 interface is particularly smaller than what GDDR5 needed for Tahiti/Tonga/Scorpio @ ~360 mm².
 