I'm sceptical. How would you cool a SoC drawing upwards of maybe 100W through several layers of DRAM? (Oh, and not cook the DRAM while doing it.) And if your SoC is larger than the DRAM stack, how do you reasonably cool the edges...?
When were the PS4 dev kits first sent to third-party developers?
> This time, the alpha kits are probably going to be even closer to final architecture.

I would be more impressed by the vague leak if we knew whether it's a PC with an AMD GPU... or an Nvidia GPU.
> Who is this guy?

Looking around, he's one of the hundred "insiders" who make vague guesses and end up being right about half the time. I'll wait for Leadbetter to corroborate with real sources.
> Who is this guy?

Says on his profile.
And that would indicate 2019.
> Maybe it's a Switch competitor.

I don't want a Vita successor, I want full PS4 on the go. A "portable PS4" would be an amazing product for 2019 [while the PS4 is still kicking ass and getting great games], but battery life will be an issue. Even if 7nm is amazingly efficient, it may creep up to ~20W in gaming [plus they would need to power the screen], and even that is A LOT.
Seeing how PS4 is selling, 2019 is too early.
> Why is this only an option for Sony?

Sure, everyone else is free to adopt it, but it's more than likely Sony is the first one to do it given their current position, unless we're optimistic enough to see a new Xbox launching two years after the One X.
> BC patent from Cerny, Nov 2015...

The cited patents are... irrelevant? And not actually cited anywhere? Still, at least we know Cerny is involved at Sony.
http://patents.com/us-9892024.html
> Why reduce the size of L1 cache to test BC?

It seems to be much more than just boost-mode testing for the Pro. It seems to be for testing power-management clocking: having CPU cores, GPU cores, various caches, or the memory bus running at different clocks, both higher and lower. Resources taken away, apps running in parallel, reduced caches and buffers, reducing the execution rate of specific instructions (again, both CPU and GPU), adding latency, changing memory-operation priorities... Hard to read because of the patent-speak, though.
Hey guys, I read a lot on here but have posted only once in three years. So, time for the second post...
About that PS5 devkit rumor: what is your opinion on the best hardware specs possible, if the rumor turns out to be true?
I was hoping for a bigger step this time, TFLOPs-wise. Around 12-15 TFLOPs, maybe. But in this scenario it seems unlikely...
> Very difficult to imagine a portable PS4 even on 7nm, I'm thinking between 40W and 50W.

So what you're saying is a 7nm version of Liverpool would consume some 30% less power than the current 16nm PS4 Slim is consuming with spinning optical and mass-storage drives and desktop GDDR5?
> Sure everyone else is free to adopt it but it's more than likely Sony is the first one to do it given their current position, unless we're optimistic enough to see a new Xbox launching two years after the One X.

You have to consider what's out today: $499 for an Xbox One X, and likely taking a loss due to RAM prices today.
> So what you're saying is a 7nm version of Liverpool would consume some 30% less power than the current 16nm PS4 Slim is consuming with disc spinning optical and mass storage drives, and desktop GDDR5?
Talk about pessimism...
The Smach-Z already gets Xbone-like performance, using a 14LPP SoC that consumes 15W.
Why is it so hard to consider that a 7nm SoC from Sony could reach PS4 performance, considering both GF and TSMC are projecting around a 60% reduction in power consumption at ISO performance?
The only missing link so far is memory bandwidth, but between 256-bit LPDDR4, 192-bit LPDDR5, HBM, or even Wide I/O, I'm sure Sony could find a way like they did in the past.
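The scaling claim above can be sanity-checked with quick arithmetic. A minimal sketch, assuming the thread's own figures (a ~15W 14nm SoC and the ~60% ISO-performance power reduction GF/TSMC are said to project for 7nm); none of these numbers are confirmed:

```python
# Back-of-envelope node-shrink power estimate.
# Baseline wattage and reduction figure are the thread's assumptions, not measurements.

def scaled_power(baseline_w: float, reduction: float) -> float:
    """SoC power after a shrink claiming `reduction` less power at ISO performance."""
    return baseline_w * (1.0 - reduction)

# ~15 W Smach-Z-class 14nm SoC moved to 7nm with a claimed 60% reduction:
print(round(scaled_power(15.0, 0.60), 1))  # → 6.0 (watts, SoC alone)
```

If those assumptions hold, the SoC alone lands well under the ~20W gaming budget mentioned earlier, leaving headroom for the screen, memory, and storage.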
Any guesses on what the dev kits have, using currently available chips?
I'll shoot first:
- AM4 platform
- Ryzen 5 2400G
- Vega 56 8GB HBM2 at 1450MHz core / 950MHz memory.
- 16-24GB DDR4 2400MHz
The final solution will obviously be a full SoC, but the Ryzen 2400G's iGPU would be used to test GPGPU functionality with very low latency on 11 CUs, whereas the Vega 56 should provide the overall final rendering performance (~10 TFLOPs).
Alternatively, they could skip the 2400G and go with e.g. a Ryzen 1700 with SMT disabled and locked at 3GHz, in case Sony is fully determined to get easy BC up and running from the start with little to no emulation.
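The ~10 TFLOPs figure follows from GCN's layout (64 FP32 lanes per CU, 2 FLOPs per fused multiply-add); the 56 CU / 1450MHz configuration is this post's guess, not a confirmed devkit spec:

```python
# FP32 throughput for a GCN GPU: CUs × 64 lanes × 2 FLOPs (FMA) × clock.
# The 56 CU / 1.45 GHz figures are the speculated Vega 56 devkit setup above.

def fp32_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    return cus * lanes_per_cu * 2 * clock_ghz / 1000.0

print(round(fp32_tflops(56, 1.45), 2))  # → 10.39
```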