Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

How often was the Xbox One fill-rate limited, or limited by the GPU front-end? Serious question. I'm assuming it was usually ALU limited or bandwidth limited.

Xbox One fill rates vs. 192/204 GB/s memory bandwidth
@RGBA8 = 853 MHz * 16 ROPs * 4 bytes = 54 GB/s (ROP bound)
@RGBA16F = 853 MHz * 16 ROPs * 8 bytes = 109 GB/s (ROP bound)
@RGBA32F = 853 MHz * 16 ROPs * 16 bytes = 218 GB/s (~bandwidth bound)
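A rough sketch of that arithmetic in code, for anyone who wants to play with the numbers; it uses only the figures from the post above and ignores blending, depth traffic and compression, so it's indicative at best:

```python
# ROP-bound fill bandwidth vs. memory bandwidth, using the Xbox One figures above.
CLOCK_HZ = 853e6          # GPU clock
ROPS = 16                 # render output units
MEM_PEAK_GBPS = 204       # the 192/204 GB/s figure quoted above

for fmt, bytes_per_pixel in [("RGBA8", 4), ("RGBA16F", 8), ("RGBA32F", 16)]:
    fill_gbps = CLOCK_HZ * ROPS * bytes_per_pixel / 1e9
    bound = "ROP bound" if fill_gbps < MEM_PEAK_GBPS else "~bandwidth bound"
    print(f"{fmt:8s}: {fill_gbps:6.1f} GB/s ({bound})")
```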
 
For current gen launch we had 720p Vs 1080p

That exaggerated the perceived gap, because you can immediately tell when an image is 720p.

Now it might be 1800p vs. full 4K, especially given all the temporal tricks we have. There won't be the obvious shimmering and aliasing that gave the difference away going into this gen.

It's a whole different comparison I think.
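For a rough sense of scale (just standard pixel counts, nothing from any leak), the launch-gen gap was a much bigger relative jump than an 1800p-vs-4K gap would be:

```python
# Relative pixel counts: the launch-gen gap vs. a hypothetical 1800p-vs-4K gap.
def px(w, h):
    return w * h

print("1080p / 720p  =", round(px(1920, 1080) / px(1280, 720), 2))   # 2.25x the pixels
print("2160p / 1800p =", round(px(3840, 2160) / px(3200, 1800), 2))  # 1.44x the pixels
```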
Not really, it was more along the lines of "900p vs 720p" and "1080p vs 900p" as far as I can remember?
 
So what makes it impossible that one may have already decided for a bigger loss at the same price?

Nothing. My only point is that the companies are targeting their specs based on their own finances up front, rather than targeting their specs as reactions to each other. I'm sure the price point is the driver of the majority of the decisions they make as they start development.
 
Even if PS5 is 9.2TF and XSX is 12TF by all accounts they will still be closer overall than Xbox One vs PS4 and PS4 Pro vs X1X were. In those cases you had a +40% differential in GPU power and significant differences in memory.

This time it looks like the max diff in GPU will be 30%, and there will not be as big of a relative difference in memory bandwidth. So when you put it in perspective, it's not as big a deal as people are making it out to be.
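A quick sketch of those relative gaps, using the commonly cited current-gen TF figures and the rumoured (unconfirmed) 9.2/12 TF next-gen numbers:

```python
# Relative GPU compute gaps (stronger vs. weaker) using commonly cited figures;
# the next-gen numbers are rumours, not confirmed specs.
pairs = [
    ("PS4 vs Xbox One",      1.84, 1.31),
    ("X1X vs PS4 Pro",        6.0,  4.2),
    ("XSX vs PS5 (rumour)",  12.0,  9.2),
]
for label, stronger, weaker in pairs:
    print(f"{label:22s}: +{stronger / weaker - 1:.0%}")
```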
 
Ariel/Gonzalo/Prospero - characters from Shakespeare's The Tempest (early hardware, dev kit)
  • Ariel - early chip
  • Gonzalo - platform that got benchmarked (containing Ariel)
  • Prospero - development kit
Oberon/Flute - characters from Shakespeare's A Midsummer Night's Dream (retail hardware)
  • Oberon - retail chip
  • Flute - platform that got benchmarked (containing Oberon)
To clarify, this gives a succession of three PCI ID values across two chip names and 4 or more revisions?

Any thoughts as to why such similar chips that are development versions of an SOC have so many IDs? There are product or mode reasons why a given chip might be associated with other IDs for discrete products.


This chip was an ES (engineering sample) back in January, and its code indicated a semi-custom console chip consisting of Zen 2 cores and a 13E9 iGPU running at 1.0 GHz
Is this narrative assuming the github leak has relevant data? If there's an assumption of 1.0 GHz or 1.8 GHz for an Ariel chip, weren't there 2.0 GHz results for Ariel?



So let me get this straight.
The XBOne got 1600MHz CPU / 800MHz GPU clocks announced at a pre-E3 event in May 2013. That was 6 (six) months before launch.
Then in August of the same year, clearly during the later part of their production cycle, they announced revised clocks for the GPU with a 6.6% boost. This was 3 (three) months before release.
Then in September of the same year, they announced revised clocks for the CPU with a 9.4% boost. This was 2 (two) months before release.
By the time of the Xbox One's upclock, Kabini had a SKU that clocked at 2.0 GHz.

A sustained 2.0 GHz or higher for an AMD GPU isn't, to my knowledge, a known outcome for RDNA. To be more like the Jaguar scenario, some AMD GPU would need to be announced in the coming months with much higher base clocks.
Alternatively, Sony would need to adopt a non-console boost strategy for its GPU, and it would still require a significant improvement to get into a clock range that is uncommon even assuming rapid throttling, with doubtful stability and potentially unsafe voltages. Those measures weren't necessary for Kabini, which clocked well past where Durango's Jaguar cores ended up.
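For reference, those upclock percentages fall straight out of the announced clocks (800 → 853 MHz on the GPU, 1.6 → 1.75 GHz on the CPU):

```python
# Xbox One late upclocks expressed as ratios.
gpu_boost = 853 / 800 - 1      # ~6.6%
cpu_boost = 1750 / 1600 - 1    # ~9.4%
print(f"GPU boost: {gpu_boost:.1%}, CPU boost: {cpu_boost:.1%}")
```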
 
Is this narrative assuming the github leak has relevant data? If there's an assumption of 1.0 GHz or 1.8 GHz for an Ariel chip, weren't there 2.0 GHz results for Ariel?
The 1.0 GHz and then 1.8 GHz results were tied to Gonzalo (Ariel); the 2.0 GHz result was tied to Oberon in the GitHub leak. The 1.0 GHz result is tied to PCI ID 13E9 and the 1.8 GHz result to PCI ID 13F8, and both PCI IDs are tied to Gonzalo. The next revision, 13F9, is tied to Flute, Ariel, and Oberon.
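Laid out as data for readability (purely a restatement of the relationships claimed above, nothing new):

```python
# PCI ID -> codename/clock relationships as described in the post above.
pci_ids = {
    "13E9": {"tied_to": "Gonzalo (Ariel)",       "result_clock_ghz": 1.0},
    "13F8": {"tied_to": "Gonzalo (Ariel)",       "result_clock_ghz": 1.8},
    "13F9": {"tied_to": "Flute, Ariel, Oberon",  "result_clock_ghz": 2.0},  # 2.0 GHz result attributed to Oberon
}
for pci_id, info in pci_ids.items():
    print(pci_id, info)
```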
 
Of course there would be differences, but no one outside console warriors and system wars would care.

The use of percentages doesn't tell the whole story. Three TFLOPs is a lot no matter which way you slice it, but if they do have the weaker machine, the gap becomes more trivial if it only means higher resolution and things like that.

The weaker machine prohibits them from using the extra compute for any exotic feature.
 
Even if PS5 is 9.2TF and XSX is 12TF by all accounts they will still be closer overall than Xbox One vs PS4 and PS4 Pro vs X1X were. In those cases you had a +40% differential in GPU power and significant differences in memory.

This time it looks like the max diff in GPU will be 30%, and there will not be as big of a relative difference in memory bandwidth. So when you put it in perspective, it's not as big a deal as people are making it out to be.
I think the right way to look at the relative differences is: are the platforms capable of delivering 4K? At 60 fps? Ray tracing, perhaps to a lesser degree?

This generation Xbox struggled early with 1080p, which was the standard output everyone was expecting and measuring. Of course, consumers largely didn't care because in isolation you couldn't tell any difference; it mattered to a small segment of gamers and was mostly a gaming-media topic because it was easy to write about and was unexpected.

To say it another way: if Xbox had been capable of 1080p for most titles early in this generation, the power differences would not have been much of a talking point even for the hardcore gamers.
 
The 1.0 GHz and then 1.8 GHz results were tied to Gonzalo (Ariel); the 2.0 GHz result was tied to Oberon in the GitHub leak. The 1.0 GHz result is tied to PCI ID 13E9 and the 1.8 GHz result to PCI ID 13F8, and both PCI IDs are tied to Gonzalo. The next revision, 13F9, is tied to Flute, Ariel, and Oberon.
I thought I ran across igpu values listing Ariel stepping results in a column next to Oberon results, including under the 2.0 GHz native category. Values like the L0 bandwidth were highly consistent with both being at 2.0 GHz.

I'm unsure what the purpose in this scenario would be to assign multiple PCI IDs to a single chip, or what would be the benefit of your second claim that multiple chips share the same PCI ID.
That there are two chip names but no significant difference between them seems unusual as well.
 
All in all, a simultaneous Base and Pro launch is the best option. A 9.2 TF base isn't as pathetically weak as a 4 TF Lockhart, so the next-gen baseline is firm, and then a 14 TF Pro would rock the world for everything else. Why can't Sony see this?
 
All in all, a simultaneous Base and Pro launch is the best option. A 9.2 TF base isn't as pathetically weak as a 4 TF Lockhart, so the next-gen baseline is firm, and then a 14 TF Pro would rock the world for everything else. Why can't Sony see this?
I guess because a 9.2 TF version is still plenty powerful. Why do people think a 5700 XT in a console would target the low end?

I thought I ran across igpu values listing Ariel stepping results in a column next to Oberon results, including under the 2.0 GHz native category. Values like the L0 bandwidth were highly consistent with both being at 2.0 GHz.

I'm unsure what the purpose in this scenario would be to assign multiple PCI IDs to a single chip, or what would be the benefit of your second claim that multiple chips share the same PCI ID.
That there are two chip names but no significant difference between them seems unusual as well.
Maybe... RT hardware in Oberon? Maybe they are hiding it? They caught a lot of "heat" on the internet, and everyone was aware of a custom Navi/Zen 2-based SoC with the codename Ariel back in Feb 2019.

There were 3 PCI IDs associated with it: 13E9 (the earliest), 13F8, and the latest, 13F9 (which is the PCI ID for Oberon A0 from June).
 
AMD will have developed a hardware solution based on MS's requirements for DXR, to be used on their own GPUs for PC. So it's logical MS would use AMD's implementation.

Microsoft develop APIs in close collaboration with hardware manufacturers. And for their part, hardware manufacturers will generally work, with varying degrees of effort, with the API standards bodies (DirectX, OpenGL, Vulkan, etc.). You can't have the API define the workings of the hardware in a new field of technology; that is literally putting the cart before the horse.

There is zero chance that AMD have two distinct hardware solutions for RT, one for Microsoft and one for Sony unless Sony developed (or heavily customised) AMD's stock RT solution but I would find that very difficult to believe.
 
I go for denial. Oberon must be either:
1. For blade servers running PS4 games on PS Now, hence no RT; the 2.0 GHz was a stress test leading to a Pro boost mode.
2. For PS5, but with more to it than what was leaked on GitHub: additional RT, disabled CUs.

Anyhow, with my feet on the ground regardless of the outcome, I don't think for a second that Sony doesn't have full control over what they want and will deliver. Look at what they have done for the console industry since 1994. Yes, a very risky business with the PS3, but they still managed to do quite well in that situation. Wish you all a nice weekend, cheers! :D
 
I thought I ran across igpu values listing Ariel stepping results in a column next to Oberon results, including under the 2.0 GHz native category. Values like the L0 bandwidth were highly consistent with both being at 2.0 GHz.

I'm unsure what the purpose in this scenario would be to assign multiple PCI IDs to a single chip, or what would be the benefit of your second claim that multiple chips share the same PCI ID.
That there are two chip names but no significant difference between them seems unusual as well.
Do you see a dual-GPU solution as feasible?
 
I go for denial. Oberon must be either:
1. For blade servers running PS4 games on PS Now, hence no RT; the 2.0 GHz was a stress test leading to a Pro boost mode.
2. For PS5, but with more to it than what was leaked on GitHub: additional RT, disabled CUs.

Anyhow, with my feet on the ground regardless of the outcome, I don't think for a second that Sony doesn't have full control over what they want and will deliver. Look at what they have done for the console industry since 1994. Yes, a very risky business with the PS3, but they still managed to do quite well in that situation. Wish you all a nice weekend, cheers! :D

Question about this: would it be cheaper and easier to port the PS4 Pro SoC to 7 nm and shrink it than to use a newly designed SoC based on RDNA?
 
Not really, it was more along the lines of "900p vs 720p" and "1080p vs 900p" as far as I can remember?

Cross platform
COD Ghosts 720p (PS4 1080p)
AC Black Flag 900p (PS4 1080p)
Watch Dogs 720p (PS4 1080p)


Exclusive
Dead Rising 3 720p
Killer Instinct 720p
Titanfall 792p
Ryse 900p
Quantum Break 720p

There were quite a few 720p titles, especially exclusive ones. PS4 held firm on exclusives apart from The Order, but that was down to black bars.

My point being that at lower resolutions it's easier to tell the difference, especially as 720p just looks bad and muddy. From 1440p up it looks generally OK, or at least not offensive to my eyes. From there I would find it hard to tell a difference without a side-by-side.

At higher resolutions it's very hard. Not quite the same thing, but this video compares an 8K QLED vs. a 4K OLED. Such a gulf in pixels should be trivial to spot when showing 8K content, and the people know (eventually; it's a slow watch) what they are looking for.


TL;DW: a split decision on a 400% difference in resolution, in a side-by-side, nose-to-the-screen comparison.
I cannot see people noticing much difference once we get to 4K.

That said, for that exact reason developers may target lower than native 4K, which would pull the weaker machine even lower and make the gap easier to see.

We shall see, I guess, although I am not sure it will be as clear-cut power-wise; I think there is more to the tale still to be revealed.
 
For current gen launch we had 720p Vs 1080p.
At worst. Most of the time it was comparable resolutions with different framerates and some visual cut-backs on XB1. Coupled with, as you say, the difference in resolution mattering relatively less as we approach diminishing returns on res, I doubt it'll matter. I doubt many folk look at Spider-Man on PS4P and lament the lack of true 4K. There'll be some, but not many.

In real terms, the difference in TFs will only matter to those comparing the numbers. Ranger's tech-illiterate pod-casters will look at the numbers and feel it's far worse, but if presented with gameplay video from the two rumoured systems, how likely is it they could pick out the weaker platform without doing side-by-side comparisons?
 
The use of percentages doesn't tell the whole story. Three TFLOPs is a lot no matter which way you slice it, but if they do have the weaker machine, the gap becomes more trivial if it only means higher resolution and things like that.

The weaker machine prohibits them from using the extra compute for any exotic feature.

3 TF sounds like a minor bump in the road; we also potentially have Lockhart, and that could be up to an 8 TF difference :runaway::runaway::runaway:

Or a 10.6 TF difference, as MS have said there will be no exclusives for probably the next two years, so we have the base Xbox One in the mix.

https://arstechnica.com/gaming/2020/01/xbox-series-x-wont-have-first-party-exclusives-for-a-while/

This is somewhat strange if they are going for out-and-out power, as they need to show it in-game.

I actually feel like a troll ATM which is totally not the intention. :-?:no:

Actual game reveals cannot come soon enough
 