Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Interesting, I will have to check when I get back home. There is one test where a Python script pulled Arden test names, but the results were for Oberon BC1. The test name was RT-related, for example:

Ray Tracing Ray-Box intersections GLX ray_tracing_box

but the result shown was 216 GB/s - the BW of the PS4 Pro in BC1 mode.

This one:

[screenshot: Screenshot_20200302-115325__01.jpg]

It's regr_round 228 (fits with the file you specified - Arden_COM_BC1_228_0617)
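
For anyone who wants to double-check, here is a minimal sketch of the kind of scan you could run over the leaked tables - assuming they are exported to CSV; the file pattern and the test_name/result column names are my guesses, not the leak's actual schema:

# Hypothetical sketch - file pattern and column names are assumptions.
import glob
import pandas as pd

# Test-name prefixes that should only exist for Arden (RT and VRS tests).
ARDEN_ONLY = ("ray_tracing_", "vrs_")

for path in glob.glob("Arden_COM_BC1_*.csv"):
    df = pd.read_csv(path)
    suspect = df[df["test_name"].apply(lambda n: str(n).startswith(ARDEN_ONLY))]
    if not suspect.empty:
        print(f"{path}: {len(suspect)} Arden test names in a BC1 (Oberon) table")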
 
That quote is made up, he didn't say that:
https://www.pcgamesn.com/amd/navi-4k-graphics-card
I also think it's suspicious.
After googling him, I only found that he's also the guy behind this 'disrupt 4K' claim, but nothing else.
I doubt AMD would confirm RDNA2 + RT for PS5 to such small sites while Lisa tried to avoid saying anything too specific to larger audiences just before.
The site with the RT claim also spelled his name incorrectly.
 
What does this mean exactly? Does a careful re-evaluation of the GitHub files need to be done, since the labelling might not match the data because whoever wrote the scripts messed up? If so, this opens a massive can of worms about using this leak as gospel. What other errors could there be? Including errors we cannot anticipate, since we lack the full context, as I said before.
 
Nothing much, basically a junior ASIC team at AMD ran a bunch of Python scripts, and in that one they pulled Arden-native test names for an Oberon BC1 test, but the result column was in fact BC1 data and had nothing to do with the test name.

In short, you had test names:

ray_tracing_box
ray_tracing_box_bvh64
ray_tracing_triangle
ray_tracing_mixed
ray_tracing_partial_wave
ray_tracing_large_workload
sqc_kcache_hit_utcl0hit_r
sqc_kcache_hit_utcl1hit_r
sqc_kcache_hit_utcl2hit_r
sqc_kcache_hit_ptemiss_pdehit_r
sqc_kcache_hit_atchit_r
sqc_kcache_hit_atcmiss_r
vrs_1x1_512
vrs_1x2_512
vrs_2x1_512
vrs_2x2_512
vrs_1x2_512_2x
vrs_2x1_512_2x
vrs_2x2_512_2x
vrs_2x2_512_4x

But the results were related to PS4 Pro bandwidth, peak texture fillrate, etc. (a sketch of how a script can produce that mismatch is below).

I mean, there are a bunch of Excel tables there, most of them repeated regression tests, and there are a few mistakes such as that one, but at least we got pretty much confirmed XSX specs out of it.
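
To make the failure mode concrete, here is an illustrative guess at the shape of the bug - this is not the actual script from the repo, and the values are stand-ins:

# Illustrative sketch only - names and numbers are stand-ins for the real data.
arden_test_names = ["ray_tracing_box", "ray_tracing_triangle", "vrs_1x1_512"]
oberon_bc1_results = [216, 216, 216]  # BC1 figures (GB/s etc.), unrelated to the names

# Bug: the wrong results list is zipped against the names, so every row gets
# a BC1 number no matter what the test name claims to measure.
for name, result in zip(arden_test_names, oberon_bc1_results):
    print(name, result)  # e.g. "ray_tracing_box 216" - exactly the mismatch above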
 
Time for a conspiracy theory :) Why was the Xbox Series GPU part ready while the Sony one wasn't during the GitHub leak? Or are the dates different for Oberon and Anaconda in the GitHub test results?
Different tests.
As I have noted many times before, the tests are real, but full-suite tests can be extremely long and costly. So if you are happy with the results, you don't keep running the full suite; you only test the portions you want to test.
 
How can we be so sure those are the only mistakes? One mistake would already have been enough to throw the whole thing into doubt. Why did it take so long for this to come to light after weeks of discussion?
 
Well, for one, you can check for yourself how the data and Excel tables were created, and then you see it's a simple Python script mistake, as they apparently ran hundreds of them. There are dozens of BC1 regression tests (all done throughout April and May), so it's pretty clear the one with Arden test names in it (for RT and VRS) is a mistake.

The results themselves are not changed, by the way; they are the same. An Arden vrs_1x1_512 test name will show maximum pixel fillrate, for example, indicating that the result and the accompanying test name make no sense together.
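
If the rounds are laid out the way the file names suggest, that claim is easy to check by diffing the result column between a normal BC1 round and round 228 - a sketch, where the 227 file name and the column name are assumptions (228 is the file cited earlier):

# Sketch - file/column names partly guessed; 228 is the file cited earlier.
import pandas as pd

r227 = pd.read_csv("Oberon_BC1_227.csv")          # hypothetical normal BC1 round
r228 = pd.read_csv("Arden_COM_BC1_228_0617.csv")  # the round with Arden test names

# Identical result columns with differing name columns would confirm that only
# the labels are wrong, not the measurements.
print((r227["result"].values == r228["result"].values).all())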
 
The repo has all the scripts, i.e. the code, so we literally can be sure those are the only mistakes.

I also pointed this out in the first few days, as I discovered the typo pretty quickly.

https://forum.beyond3d.com/posts/2094160/
 
O'dium on GAF now thinks it should likely be 11.5 TFLOPS for PS5. My guess is the latest number he became aware of was 11.5 TFLOPS, and he gave the whole range he has historically ever gotten (from 10.5 to 11.5). There could be another explanation for his previous reluctance to acknowledge the 11.5 figure as a serious possibility: he is actually more of an Xbox fan. :yep2:

But the biggest clue about that ~11.5 TFLOPS number is, ironically, in the 9.2 TFLOPS bible*, with that abnormal ~552 GB/s theoretical number** for the GDDR6 bandwidth (pretty much the exact same number found in the Flute benchmark).

Sony and MS have the same data from AMD RDNA GPUs, and RAM this time is allegedly the most expensive part of the console. So they won't provide more than what is needed for the RAM (size and bandwidth), particularly Sony, who are clearly not in the habit of giving ample memory bandwidth in their recent consoles (the Pro is obviously BW-starved, and the PS4 had just what was needed for the job, not a GB/s more).

So, well, I think the Oberon B0 BW is clearly designed for a ~12 TFLOPS GPU (say from 11 to 13), not for an 8-9 TFLOPS GPU.


* github leak
** Ariel theoretical / measured: 448 / 431.3344 | Oberon B0 measured: 531.2 | Oberon B0 theoretical should be about: 551.72 GB/s
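
The ** figure is a simple cross-multiplication - assuming Oberon B0 loses the same fraction between theoretical and measured BW as Ariel does:

# Extrapolating Oberon B0's theoretical BW from Ariel's theoretical/measured ratio.
# Assumes both chips lose the same fraction of BW between theory and measurement.
ariel_theoretical  = 448.0     # GB/s (256-bit @ 14 Gbps)
ariel_measured     = 431.3344  # GB/s (from the leak)
oberon_b0_measured = 531.2     # GB/s (from the leak)

print(round(oberon_b0_measured * ariel_theoretical / ariel_measured, 2))  # 551.72 GB/s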
 
But we do know Oberon is 256-bit, don't we? I'm a bit confused by the million code names cross-referenced with komachi etc...

If that's the case, I seriously doubt they're using 18 Gbps parts in a mass-production console. It's unobtainium even for Nvidia's highest-priced cards. It's not even about the price; that speed bin doesn't have enough yield to even have a part number yet, let alone multi-sourcing.

I think 14 Gbps is still a safe bet, and 16 Gbps is possible if the timing is right for Samsung's next memory node. Usually the speed steps up a notch at every new node.

The high GPU clock is possible based on clever cooling and sacrificing yield, but memory bins cannot be pushed in such a way.
 
It's not 18 Gbps. It's about 17.25 Gbps, for a very limited quantity (devkits). 16 Gbps chips are officially available now (mass production for 1GB chips, limited quantities for 2GB chips). I'd say a very limited quantity of 17.25 Gbps 1GB chips shouldn't be impossible.

https://www.samsung.com/semiconductor/dram/gddr6/

Let them worry about mass-producing those in the very near future. Their problem, not ours. :LOL:
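
That 17.25 Gbps falls straight out of the ~552 GB/s theoretical number on a 256-bit bus - standard GDDR6 arithmetic, nothing leak-specific:

# GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte.
bus_bits = 256
print(552.0 * 8 / bus_bits)  # 17.25 Gbps per pin
print(17.25 * bus_bits / 8)  # 552.0 GB/s back again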
 
Maybe they overclocked the chips just for testing purposes, to see the impact of bandwidth?

Speed bins are very much the same idea as processor binning - operational voltage and power consumption. It may not be impossible, but the question is the impact on yields and production volume.
 
We do know it's a 256-bit bus, yes.

As for the speed of the memory modules, let's just say the PS4 used downclocked 6 Gbps chips (5.5 Gbps) in 2013, when top-of-the-range Ti GPUs from Nvidia used 6 Gbps. The PS4 Pro, on the other hand, used downclocked 7 Gbps chips (6.8 Gbps) in 2016, when binned chips were 7-8 Gbps max.

I think this year all top-range cards will be using 18 Gbps chips, and I am 99% sure the PS5 will be using a downclocked version of these, as 16 Gbps (let alone 14 Gbps) won't be nearly enough.

I played around with BW-per-TF numbers today for last gen's consoles vs. their PC GPU equivalents, and what I found is that the consoles had 24-29% higher total BW in comparison. So I would expect the PS5 to have around 520-530 GB/s if it is a 9 TF console.

Arden will have to have more than 560 GB/s if it is a 12 TF console (more like 640 GB/s minimum).
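
Those chip speeds map to the known console bandwidths with the standard formula, and the 520-530 GB/s guess implies a specific downclock (the speeds are the ones quoted above):

# BW (GB/s) = bus width (bits) * chip speed (Gbps) / 8
def bw(bus_bits, gbps):
    return bus_bits * gbps / 8

print(bw(256, 5.5))   # 176.0 GB/s - PS4 (downclocked 6 Gbps chips)
print(bw(256, 6.8))   # 217.6 GB/s - PS4 Pro (downclocked 7 Gbps chips)
print(bw(256, 18.0))  # 576.0 GB/s - 18 Gbps chips at full speed
print(520 * 8 / 256, 530 * 8 / 256)  # 16.25-16.5625 Gbps implied by 520-530 GB/s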

You are making a mistake in your calculations. There is no 552 GB/s theoretical number; there is only 448 GB/s theoretical (14 Gbps chips) and 528-530 GB/s (Flute/Oberon B0), and that would make the measured chips 16.5 Gbps.

The theoretical number depends on bus width * chip speed.
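
In numbers (same formula, bus width * per-pin speed / 8):

bus_bits = 256
print(bus_bits * 14 / 8)   # 448.0 GB/s theoretical with 14 Gbps chips
print(528 * 8 / bus_bits)  # 16.5 Gbps implied by the 528 GB/s Flute/Oberon B0 figure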
 
So they are putting them in the devkits for.....the fun of it? Or to make the 9.2 github leak real? And we're back to 9.2??? To punish Sony for not revealing? That I can at least agree with, now hand me a pitchfork.
 
552 GB/s is a theoretical number I calculated using the same Ariel theoretical/measured BW ratio. There is no mistake, and it's a reasonable calculation, assuming they are running the same tests.

16.5 Gbps chips? Now that's a big mistake, as vendors only use theoretical numbers for those chips, never measured BW! And the speed can only be tied to the theoretical number anyway. 560 GB/s for Arden is in the GitHub leak and was already predicted well before, just from the Scarlett CGI reveal.

But you are totally making that 640 GB/s number up. Where does it come from?

Is it Groundhog Day today... again? Seems like we already had this discussion before...
 
Perhaps English is not your native language (it's not mine), but you have completely missed the point.
  1. The theoretical value is calculated from bus width and memory speed.
  2. In the GitHub leak, the theoretical value was 448 GB/s due to the fact that the bus width was 256-bit and the memory speed was 14 Gbps.
  3. If you suddenly put 16 Gbps chips on the motherboard, with that 256-bit bus width, you will achieve 512 GB/s of BW. If you put 18 Gbps, total BW will be 576 GB/s.
  4. 576 GB/s is theoretically the highest BW possible on a 256-bit bus, as there are no chips faster than 18 Gbps.
  5. Anything below 18 Gbps will result in lower total bandwidth (if the bus width stays 256-bit).
  6. Coincidentally, 528 GB/s of BW points at 16.5 Gbps chips.
  7. Nothing stops Sony or MS from clocking their chips however they want (well, obviously the upper limit is 18 Gbps). Case in point: PS4 - 5.5 Gbps / PS4 Pro - 6.8 Gbps.
There is no reason for you to calculate made-up "theoretical" numbers; we know them very well. However Sony clocks their memory chips, that's what the total BW will be.

For Arden it's the same, except Arden has a 320-bit bus width, therefore with 14 Gbps chips Arden will have 560 GB/s. I compared console BW vs. equivalent PC GPU BW and found that the total BW of the consoles is 24-29% higher than that of the equivalent PC GPU. Therefore, I think that if Arden is 12 TF, 560 GB/s will not be sufficient and they will have to use 16 Gbps chips at the very least, which would result in 640 GB/s (worked numbers below).

For Oberon, we have measured data with chips faster than 14 Gbps. For Arden we only have the default theoretical - 14 Gbps - therefore we will have to wait and see what they put in that box.
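
For reference, the Arden scenarios from the post above, using the same arithmetic on a 320-bit bus:

# Arden: 320-bit bus per the leak; the chip speeds are the scenarios discussed.
bus_bits = 320
for gbps in (14, 16):
    print(gbps, "Gbps ->", bus_bits * gbps / 8, "GB/s")  # 560.0 and 640.0 GB/s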
 
So OsirisBlack created an account on ResetEra. The mods noticed and told him to check his PMs. The end result is that the account is banned. The message to the posters in their speculation thread was:

"Let's not concern ourselves any further and get back to the fun speculation"

So does that mean he failed their verification process, and everyone should place appropriate levels of credibility on everything he presented on GAF?
 