Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Better than this one? ;)

[attached image: xenon.gif]


Crazy that it leaked 18 months before release.

In terms of sheer volume of data, the GitHub leak is the most egregious. It leaked 3 consoles + 2 AMD GPUs.

If we draw parallels to previous gens, once we got detailed CPU and GPU info, the final hardware was pretty much the same outside of clocks. This is the most detailed leak ever! Memory is the one area where we've seen late upgrades, and that's because it is possible to do so.
 
The memory optio…
Hmm... there's probably some consideration needed for the trace routing & memory bus to support higher frequencies. :p So it depends on whether they went overboard on the mainboard.
Doubtful they'll need to discard any existing product.
 
There is no luck involved in these decisions. It takes years of planning and development.

If they are clocked at 2GHz, it's because there's something we don't know about what AMD has been developing for 2020, or about 7nm process improvements, or about cooling techniques, etc...

Being skeptical is not the same thing as claiming it's false. Claiming 2GHz is possible, without knowing how they did it, is worse.

What surprises me the most is that Sony has been extremely conservative with clocks for the PS4 and Pro, and all of a sudden they would go nuclear?
 
Better than this one? ;)

[attached image: xenon.gif]


Crazy that it leaked 18 months before release.

Damn, most leaks are like this, yes. Not 'journalists' who saw a devkit :)

If we draw parallels to previous gens, once we got detailed CPU and GPU info, the final hardware was pretty much the same outside of clocks. This is the most detailed leak ever! Memory is the one area where we've seen late upgrades, and that's because it is possible to do so.

Yeah, I think we got it; DF thought so as well. Also, it's close to when production starts.
 
Heat associated with the rumored clocks is a problem NOW, but perhaps wasn't meant to be when it was designed. That is the point.

What if Navi was meant to be clocked like Turing cards and you could deliver 36 CUs at 2.0GHz in a ~200W box?

What if Samsung couldn't deliver 8GB of GDDR5 in 2013 and only did it a year later? You would probably say "The choice to go with GDDR5 instead of safe DDR3 memory was illogical."
You're the person trying to draw a parallel between PS4/XBO, and it's a weak argument because, again, the examples are not the same...

Again, the argument here has nothing to do with console wars; it's a heat/efficiency issue.

If there's information available to suggest AMD is able to clock these chips that high, let's see it. In the meantime, it's perfectly reasonable to be asking questions and putting an asterisk next to conclusions which are not presently supported by what we know about consoles and heat.
 
Better than this one? ;)

[attached image: xenon.gif]


Crazy that it leaked 18 months before release.

Haha. I remember discussing that with a fellow engineer (at Motorola Semiconductor, from the PowerPC side actually), and he was so adamant that it was fake because no one in their right mind would build a 3-core CPU. The whole thing was total nonsense.


With regards to the 2GHz clock, I think it's reasonable to expect architectural and process improvements in the RDNA GPU in the 16-18 months between the RX 5700 series release and the consoles shipping.

There were major improvements in power efficiency between the release of the RX 580 and the X1X. Both have similar performance, but the RX 580 board alone (TechPowerUp tests) can exceed the total system power of the X1X.

I think PS5 will come in around ~200W like the original PS3 and will quickly transition to a new node within a year or so. The savings won't be big enough for a slim, just a cost reduction on the cooling.

For the XSX, I think it will be in the 200-250W range.
 
I agree. Contrary to what I thought earlier, I don't think the XSX is a 300+W machine. I think it's 250W max.

I think PS5 is 175-200W.

300W is reserved for the XSX 2 so that Phil can say they redefined consoles again. :LOL:
 
From July :

https://www.anandtech.com/show/14687/tsmc-announces-performanceenhanced-7nm-5nm-process-technologies

TSMC has quietly introduced performance-enhanced versions of its 7 nm DUV (N7) and 5 nm EUV (N5) manufacturing processes. The company's N7P and N5P technologies are designed for customers that need to make their 7 nm designs run faster, or consume a slightly lower amount of power.

TSMC's N7P uses the same design rules as the company's N7, but features front-end-of-line (FEOL) and middle-end-of-line (MOL) optimizations that enable either a 7% performance boost at the same power, or a 10% reduction in power consumption at the same clocks. The process technology is already available to TSMC customers, the contract chipmaker revealed at the 2019 VLSI Symposium in Japan, yet the company does not seem to advertise it broadly.

I am sure there is a reason Oberon is running at 2.0GHz in that AMD repo.
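
For a quick sanity check of that, some back-of-the-envelope arithmetic, assuming the RX 5700 XT's ~1755 MHz game clock as the N7 baseline (the projection is mine, not anything from the repo):

```python
# Rough back-of-the-envelope: what does N7P's +7% at iso-power buy
# over a Navi 10 baseline? 1755 MHz is the RX 5700 XT's public "game
# clock" on N7; 2.0 GHz is the figure from the leaked repo.
baseline_mhz = 1755
n7p_gain = 1.07            # TSMC's claimed +7% performance at the same power
target_mhz = 2000

projected = baseline_mhz * n7p_gain          # ~1878 MHz
shortfall = target_mhz / projected - 1       # ~6.5% still missing
print(f"N7P projection: {projected:.0f} MHz, {shortfall:.1%} short of 2.0 GHz")
```

So N7P alone gets you most of the way, but something else (binning, architecture tweaks, TDP headroom) has to cover the last ~6.5%.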
 
Please, Gaming Gods, let the leak gates open and provide some concrete information like you all did with VGLeaks in 2013.
Just food for thought...

Die sizes :

PS2 - 519mm2 ↓
PS3 - 493mm2 ↓
PS4 - 348mm2 ↓
Pro - 320mm2 ↓
PS5 - ???mm2 ?

Clocks :

PS5 - ???GHz ?
Pro - 911MHz ↑
PS4 - 800MHz ↑
PS3 - 500MHz ↑
PS2 - 147MHz ↑

...as we go further down the manufacturing nodes, there is a clear trend of chip sizes getting smaller and smaller while frequencies get higher and higher.

This is no doubt a result of the much higher expense of chip design and the higher cost per mm² of silicon.

[attached image: nano3.png]


When people ask themselves why Sony would go for a narrower and faster design (besides BC considerations - which are paramount to next-gen success and the ability to transition PS4 players to PS5) - here is your answer.

Sony could afford huge die sizes for launch consoles back in the early PS days; Sony could quickly transition to newer nodes that offered obvious cost savings. That reality changed in the midst of PS3 development, and Sony ended up with a huge chip that couldn't be cost-reduced as before.

Frequency boosting is the simplest way to increase performance, but the hardest to push due to thermals. GPU transistor counts have grown at a much faster pace than frequencies. It's that way for a reason.
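
To put rough numbers on the narrow/fast vs wide/slow question: RDNA FP32 throughput is just 2 ops × 64 lanes × CUs × clock. A quick sketch using the rumored configurations floating around this thread, which are not confirmed specs:

```python
# RDNA FP32 throughput: 2 ops (FMA) x 64 lanes x CUs x clock.
# The CU/clock pairs are rumored figures from this thread, not specs.
def tflops(cus: int, ghz: float) -> float:
    return 2 * 64 * cus * ghz / 1000

print(f"narrow/fast, 36 CU @ 2.0 GHz: {tflops(36, 2.0):.2f} TF")  # ~9.22 TF
print(f"wide/slow,   56 CU @ 1.7 GHz: {tflops(56, 1.7):.2f} TF")  # ~12.19 TF
```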


From July :

https://www.anandtech.com/show/14687/tsmc-announces-performanceenhanced-7nm-5nm-process-technologies



I am sure there is a reason Oberon is running at 2.0GHz in that AMD repo.

A 7% performance boost doesn't allow a base clock of 2.0 GHz.

2.0 GHz would require more than just a jump to an enhanced node.
 
A 7% performance boost doesn't allow a base clock of 2.0 GHz.

2.0 GHz would require more than just a jump to an enhanced node.
This is just a quick way to get 7% more performance at the same TDP, or 10% less consumption at the same clocks.

Of course, we are completely ignoring any improvements they will make to chips coming out by the end of 2020 while comparing them to ones that came out half a year ago on a brand-new node. Along with N7P, even a 10% improvement in TDP over this year's Navi 10 will be sufficient.
 
Please, Gaming Gods, let the leak gates open and provide some concrete information like you all did with VGLeaks in 2013.


Sony could afford huge die sizes for launch consoles back in the early PS days; Sony could quickly transition to newer nodes that offered obvious cost savings. That reality changed in the midst of PS3 development, and Sony ended up with a huge chip that couldn't be cost-reduced as before.

Frequency boosting is the simplest way to increase performance, but the hardest to push due to thermals. GPU transistor counts have grown at a much faster pace than frequencies. It's that way for a reason.




A 7% performance boost doesn't allow a base clock of 2.0 GHz.

2.0 GHz would require more than just a jump to an enhanced node.
And it's not just thermals; the internal power delivery must allow that amount of current. That's how overclockers "cheat": by raising the voltage very high to compensate for the losses, not just for the switching slew rate of the transistors.

Here's a wild guess: if they planned from the start to spend more metal layer area on fatter power delivery, maybe it's possible they moved the efficiency knee of the architecture further up?
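
To illustrate with a toy model: dynamic power goes roughly as P ≈ C·V²·f, and near the top of the V/f curve the voltage has to rise with frequency. The voltage/frequency points below are made up purely for illustration:

```python
# Toy model of why current delivery, not just heat, caps clocks:
# dynamic power P ~ C * V^2 * f, and near the top of the V/f curve
# voltage has to rise with frequency, so power grows much faster than
# the clock. All numbers below are illustrative, not chip data.
def power(v_volts: float, f_ghz: float, c_eff: float = 1.0) -> float:
    return c_eff * v_volts**2 * f_ghz

base = power(1.00, 1.8)   # hypothetical 1.8 GHz at 1.00 V
fast = power(1.10, 2.0)   # the same chip pushed to 2.0 GHz at ~1.10 V
print(f"{2.0/1.8:.2f}x clock -> {fast/base:.2f}x power")        # 1.11x -> 1.34x
print(f"current (P/V) rises {fast/1.10/(base/1.00) - 1:.0%}")   # ~22%
```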
 
This is just a quick way to get 7% more performance at the same TDP, or 10% less consumption at the same clocks.

Of course, we are completely ignoring any improvements they will make to chips coming out by the end of 2020 while comparing them to ones that came out half a year ago on a brand-new node. Along with N7P, even a 10% improvement in TDP over this year's Navi 10 will be sufficient.

That year and a half is not as large as you think. The OEM PC contract wins that AMD garners don't require aligning with OEM product launches or providing a busload of parts at the start. Part of that year-and-a-half difference is spent on QC/QA across all of the console's parts and on having millions of consoles available from the get-go.

And yes, we have seen a rather large jump in frequency between Polaris 20 and Polaris 30. But the highest-binned Polaris 20 has a TDP of 180W, while Polaris 30 was a 225W part. You get the same level of frequency jump when you move from a 180W Navi 10 to a 235W Navi 10.

The jump in frequency comes from an enhanced node and probably optimizations that allow better yields at a higher TDP.

What you are expecting is an enhanced node and optimizations that lead to a rather hefty jump in frequency with relatively little impact on TDP.

I could see that type of jump if we were talking about a more mature arch moving to a new node, where AMD gets an appreciable shrink in transistors and has a working knowledge of an arch that spans generations. But RDNA is still relatively new, and N7 to N7P isn't a huge jump.
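
Rough numbers on that Polaris comparison, using the TDPs cited above; the boost clocks (RX 580 ~1340 MHz, RX 590 ~1545 MHz) are public specs I'm adding for the comparison:

```python
# Polaris 20 -> Polaris 30: how much TDP that frequency jump cost.
clock_gain = 1545 / 1340   # ~1.15x frequency (public boost clocks)
tdp_gain = 225 / 180       # ~1.25x board power (TDPs cited above)
print(f"~{clock_gain - 1:.0%} more clock for ~{tdp_gain - 1:.0%} more TDP")
```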
 
None of this explains how all the actually verified industry insiders are claiming both consoles are within spitting distance of each other, both at double-digit TFLOPS - which a 9.7 TFLOPS GPU that is at least 24% slower than the competition clearly isn't.
Devs said the Xbox One and PS4 were basically the same speed before launch too. That matches what's happening now.
 
What's to say they haven't? Available to whom, in volume? Also, they are still 11 months from launch.

I assume production starts in May/June, so they have to order some parts around now. The point here is that it's not a spontaneous decision you can pull out of your ass just before production starts, which is what the previous poster assumed.
 
Oh, okay. Well, that's something else really. The camera as an interface is dead. For streaming, sure, but that's not why it was invented, and a far simpler solution could be used there, I'm sure. It certainly wasn't designed for streaming but for camera vision and object tracking. A streaming camera may be a thing.

Streaming and Playroom were part of the tentpole social features demonstrated at the PS4's reveal in 2013. There really isn't a lot of complicated tech in the PS4 camera; the stereoscopic camera setup is good for lighting and depth detection, which aids background masking. Everything else - motion detection, facial recognition, voice recognition - is done in software.
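
For what it's worth, depth-assisted background masking is conceptually simple. A minimal sketch with fake data, nothing resembling Sony's actual implementation:

```python
import numpy as np

# Minimal sketch of depth-based background masking - the kind of job a
# stereo camera's depth map makes cheap. Fake data throughout.
depth = np.random.uniform(0.3, 4.0, size=(720, 1280))           # metres
frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)

player_mask = depth < 1.5                 # keep anything nearer than 1.5 m
masked = frame * player_mask[..., None]   # zero out the background pixels
```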
 
If 56 CUs @ 1700MHz is true, then I don't think ~300MHz more at 36 CUs is so strange. Narrow/fast vs wide/slow(er).

Probably Sony will be forced to go with two or three consoles, binning for both mem chips & APU chips...

$399 - 40 CUs @ 2.0GHz, PS5 (16GB fast RAM)
$250 - 36 CUs @ 1.8GHz, PS4 Pro 2 (8GB fast RAM)
$150 - 36 CUs @ 0.9GHz, PS4 Pro diskless (8GB slow RAM)

I know it looks strange... please don't ban me for talking about this PS4 Pro 2 again.

The XBSX should be around $500 or $550.
 
Haha. I remember discussing that with a fellow engineer (at Motorola Semiconductor, from the PowerPC side actually), and he was so adamant that it was fake because no one in their right mind would build a 3-core CPU. The whole thing was total nonsense.


With regards to the 2GHz clock, I think it's reasonable to expect architectural and process improvements in the RDNA GPU in the 16-18 months between the RX 5700 series release and the consoles shipping.

There were major improvements in power efficiency between the release of the RX 580 and the X1X. Both have similar performance, but the RX 580 board alone (TechPowerUp tests) can exceed the total system power of the X1X.

I think PS5 will come in around ~200W like the original PS3 and will quickly transition to a new node within a year or so. The savings won't be big enough for a slim, just a cost reduction on the cooling.

For the XSX, I think it will be in the 200-250W range.

IMO, if it is indeed clocked at 2 GHz, I'm thinking it'll likely be in the 225-275 watt range. And that is assuming they get some relatively significant gains WRT power efficiency. If the power efficiency gains are relatively minor, then it could be even higher than that.

That's basically going to keep it beyond the knee of the power efficiency curve.

Regards,
SB
 
And it's not just thermals; the internal power delivery must allow that amount of current. That's how overclockers "cheat": by raising the voltage very high to compensate for the losses, not just for the switching slew rate of the transistors.

Here's a wild guess: if they planned from the start to spend more metal layer area on fatter power delivery, maybe it's possible they moved the efficiency knee of the architecture further up?
It's possible to extend the frequency envelope like that; it's part of what makes an "HP" process optimized for high performance!
But these optimizations typically come at the expense of density. And for something like GPUs, which parallelize nicely, and consoles, which benefit from low power draw and cheap cooling, the natural fit would be to do the opposite - increase density, go wider, and drop power draw - where the area cost of going wider is offset by the higher density of the chip.

We’ll see.
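
As a toy iso-throughput comparison (the voltages are hypothetical, purely to illustrate the density-vs-frequency trade):

```python
# Toy iso-throughput comparison: the same FP32 TFLOPS out of a
# narrow/fast vs a wide/slow config, with power ~ CUs * V^2 * f.
# The voltages are made up purely to illustrate the trade.
def tf(cus: int, ghz: float) -> float:
    return 2 * 64 * cus * ghz / 1000

def pwr(cus: int, volts: float, ghz: float) -> float:
    return cus * volts**2 * ghz                   # arbitrary units

print(tf(36, 2.0), tf(48, 1.5))                   # 9.22 TF either way
print(pwr(36, 1.10, 2.0) / pwr(48, 0.95, 1.5))    # narrow/fast burns ~1.34x
```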
 