PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)

Perhaps all they've done is drop it to the 14nm process node, enable the two disabled CUs on the GPU and upclocked the hell out of the whole thing?

Sure, but where's the fun in that? Let's make consoles weird again! Except maybe this time do it without making developers' lives hell for no good reason.
 
Perhaps all they've done is drop it to the 14nm process node, enable the two disabled CUs on the GPU and upclocked the hell out of the whole thing?
Except that would not give them HDMI 2.0, HDR, HDCP 2.x and hardware-based H.265 encoding/decoding. They HAVE to add that, so they cannot simply shrink the PS4 APU... and since they have to tweak the APU, why not add Puma+ and more CUs/a Polaris module... :)
 
I still consider a discrete GPU plausible, though, as unlikely as it is.
It's more within the realm of possibility, yes, although you'd need to double up the RAM in the machine to make it work, so a lot of GDDR chips eating board space... Also a more complicated cooling setup, but Sony has lots of experience making coolers for two main chips after the PS3, so not really a problem as such.

Maybe something along the lines of Prophecy and Diehard's suggestions combined. If the APU is dropped down to 14nm you get a lot of extra frequency headroom, from what has been said in other threads. A full 20CU GPU clocked at 1400-ish MHz would be well over a 50% paper performance increase from that change alone (rough numbers sketched below). Modify the design to bring in updated HDMI, codecs, whatnot... done and done.
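Rough numbers, sketched in Python; the 18CU/800MHz baseline is the known launch PS4 config, and 2 FLOPs per ALU per clock is the standard GCN FMA assumption:

```python
# Paper-FLOPS comparison: launch PS4 vs the hypothetical 14nm shrink.
# GCN CUs have 64 ALUs; an FMA counts as 2 FLOPs per clock.

def gflops(cus: int, mhz: float, alus_per_cu: int = 64) -> float:
    """Peak single-precision GFLOPS = CUs * ALUs * 2 ops (FMA) * clock."""
    return cus * alus_per_cu * 2 * mhz / 1000.0

stock = gflops(18, 800)     # ~1843 GFLOPS: launch PS4, 2 CUs fused off
shrunk = gflops(20, 1400)   # ~3584 GFLOPS: full die at 1400 MHz

print(f"stock:  {stock:.0f} GFLOPS")
print(f"shrunk: {shrunk:.0f} GFLOPS ({shrunk / stock - 1:+.0%})")  # +94%
```

So if the clocks actually got there, it'd be closer to double the paper performance, not just +50%.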

Or do a full rework: Puma cores, Zen cores (no wai, but we're speculating here), latest GCN IP... but that's virtually a new console. It'd be a big job. Architecturally it'd be pretty much a PS5, not a PS4K. Not that I'd complain! :D But I'd be surprised.
 
A full 20CU GPU clocked at 1400-ish MHz would be well over a 50% paper performance increase from that change alone.

There will always be switched-off CUs; they need that for better yields. 14nm is a new process, most probably with lower yields than the very mature 28nm process.
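To see why the spare CUs help, here's a toy Poisson defect-yield model; every number in it (defect density, areas) is invented purely for illustration, not real 14nm process data:

```python
# Toy yield model: Poisson defects, die = 20 CUs + "everything else".
# D, A_CU and A_REST are invented numbers, not real process data.
from math import comb, exp

D = 0.25                     # defects per cm^2 (assumed)
A_CU, A_REST = 0.03, 1.15    # cm^2 per CU / rest of die (assumed)
N_CU = 20

p_bad = 1 - exp(-A_CU * D)   # chance any one CU catches a defect
y_rest = exp(-A_REST * D)    # the non-CU logic has no redundancy

def p_at_most(k: int) -> float:
    """P(at most k of the 20 CUs are defective), binomial sum."""
    return sum(comb(N_CU, i) * p_bad**i * (1 - p_bad)**(N_CU - i)
               for i in range(k + 1))

print(f"need all 20 CUs good: {y_rest * p_at_most(0):.1%}")   # ~64.6%
print(f"2 spare CUs allowed:  {y_rest * p_at_most(2):.1%}")   # ~75.0%
```

The absolute numbers are meaningless, but the gap between the two lines is the salvage you give up by shipping with all CUs enabled.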
 
why not add Puma+ and more CUs/a Polaris module... :)

IIRC 'Puma+' cores are just AMD's name for Jaguar cores on GF's 28nm process. There are no architectural improvements.
AMD's design team responsible for the Bobcat/Jaguar cores no longer exists (its personnel having moved to Samsung, Qualcomm and AMD's Zen team), which may complicate porting the core to 14nm FinFET.

My guess is still a move to Zen cores. At least Jaguar is a pretty standard 2-way OoO core without any glass jaws or stand-out features; generally, any code it runs, Zen should be able to run faster.
 
There will always be switched off CUs, they need that for better yields.
Maybe. 14nm isn't THAT new though, and Apple's buying tens of millions of chips on it every quarter with no fused-off CPU or GPU cores. Their chips are smaller than a theoretical 14nm "PS4K" APU would be, but not by orders of magnitude (~100 mm² vs ~175 mm², give or take).

...And Apple does order A LOT of chips. So yields aren't terrible, that's for sure.
 
Perhaps all they've done is drop it to the 14nm process node, enable the two disabled CUs on the GPU and upclocked the hell out of the whole thing?

20 CUs at 1.2GHz would do 3 TFLOPs. Aside from that, they could use 7 Gbps GDDR5 for 224GB/s and maybe also upclock the CPU cores towards 2.4GHz (to maintain GPU-CPU clocks ratio).
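Both numbers check out on paper; here's a minimal Python sanity check (the 256-bit bus is the known PS4 memory interface, and an FMA counts as 2 FLOPs per ALU per clock):

```python
# Sanity-check the 3 TFLOPS and 224 GB/s figures from the post above.

CUS, ALUS_PER_CU, GPU_GHZ = 20, 64, 1.2
tflops = CUS * ALUS_PER_CU * 2 * GPU_GHZ / 1000   # FMA = 2 ops/clock
print(f"{tflops:.2f} TFLOPS")                     # 3.07 TFLOPS

BUS_BITS, GBPS_PER_PIN = 256, 7.0                 # 256-bit bus, 7 Gbps GDDR5
bandwidth_gbs = BUS_BITS / 8 * GBPS_PER_PIN       # bits -> bytes per transfer
print(f"{bandwidth_gbs:.0f} GB/s")                # 224 GB/s
```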
 
20 CUs at 1.2GHz would do 3 TFLOPs. Aside from that, they could use 7 Gbps GDDR5 for 224GB/s and maybe also upclock the CPU cores towards 2.4GHz (to maintain GPU-CPU clocks ratio).

It's almost painful knowing that the silicon already in my PS4 could potentially put out as much as 3 TFLOPs.
 
man, 4K seems like a super hard number to achieve - 3840*2160 is not a low resolution at all.
God damn. Minimum requirements look to be around GeForce GTX 980 performance to keep that resolution at 30fps.
Tall order: a 980 is about 4.61 TFLOPS
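For reference, the arithmetic behind that comparison, sketched in Python; the 980 figure assumes its 1126MHz base clock, and the 3.07 TFLOPS is the hypothetical 20CU/1.2GHz part from earlier in the thread:

```python
# How far the speculated PS4K falls short of GTX 980-class 4K power.

px_1080p = 1920 * 1080                 # 2,073,600 pixels
px_4k = 3840 * 2160                    # 8,294,400 pixels
print(f"4K is {px_4k / px_1080p:.1f}x the pixels of 1080p")   # 4.0x

gtx980_tflops = 2048 * 2 * 1126 / 1e6  # ~4.61 TFLOPS at base clock
ps4k_tflops = 20 * 64 * 2 * 1.2 / 1e3  # ~3.07 TFLOPS, speculated part
print(f"speculated PS4K: {ps4k_tflops / gtx980_tflops:.0%} of a 980")  # ~67%
```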

Perhaps all they've done is drop it to the 14nm process node, enable the two disabled CUs on the GPU and upclocked the hell out of the whole thing?
20 CUs at 1.2GHz would do 3 TFLOPs. Aside from that, they could use 7 Gbps GDDR5 for 224GB/s and maybe also upclock the CPU cores towards 2.4GHz (to maintain GPU-CPU clocks ratio).
That's quite an overclock, 50% up on the stock clocks.
 
man, 4K seems like a super hard number to achieve - 3840*2160 is not a low resolution at all.
God damn. Minimum requirements look to be around GeForce GTX 980 performance to keep that resolution at 30fps.
Tall order: a 980 is about 4.61 TFLOPS
Yeah, I find it hard to believe that the PS4K would be able to play the same games at 4x the resolution of the standard PS4. It'd certainly need to be a lot more than 50% faster...

Simple maths would suggest that it needs to be 4x as powerful (300% more), but I'm not quite sure if that's strictly true in practice.
 
To be on the safe side, you'd probably have to shoot for more than 4x, and also increase other resources (interconnects, bandwidth, IO, OS efficiency, capacity). We never get perfect scaling, and a 4x jump is well outside the range where you can hand-wave away the non-improved parts of the system as trivial.
 
The fans are all excited about the hypothetical extra FLOPS, but nobody really cares (or should) about 4K by itself: at standard viewing distances it won't make much of a difference, and that's assuming rendering close to native resolution. Games might actually be the content that benefits least from the extra resolution. Browsing is a little tricky through my Alienware connected to a Full HD TV, and extra pixels would help there, because it's blurry (genuinely blurry, not the usual snobbery where a phone or tablet with a better screen than all your other devices makes everything else look 'blurry'). Now, leaving the FLOPS aside, since here they'd pretty much just push the same stuff out at a higher resolution, a PS2.33333333333333 is what we should want :)
 
Correction to my earlier post on AMD's MOESI scheme: the home agents do snoop, but given their placement as arbiters over their share of memory they don't have a reason to snoop each other.
 
It could also potentially deliver 6 TFLOPS with perfect binning, nitrogen cooling and a quadrupling of clocks. :)
It's actually for this reason that I don't believe it will be native 4K gaming. If we're being realistic (truly realistic), it's technically achievable, but business-wise, with so few people owning 4K sets, it's hard to believe they would release a device specifically for that market this early.

I get putting in support to upscale to 4K or something of the sort, but beyond that it's hard to imagine a console achieving performance parity with an R9 290-class card without busting the power envelope or being excessively expensive.
 
With these latest reprojection techniques they don't need to render natively at 4K. Like it has been said before, even a middle ground between 1080p and 4K ('3K'? No idea) would do, if it's then pumped up to full res like it's done today with 1080p. It won't look as good, but it's definitely doable.

If QB can render at 720p and look like that, there's hope.
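To put numbers on that middle ground, a quick sketch; the intermediate resolutions here are just plausible examples, nothing announced:

```python
# Pixel cost of sub-native rendering targets relative to native 4K.
# The intermediate "3K" figure is just one plausible example resolution.

native_4k = 3840 * 2160

for name, w, h in [("1080p", 1920, 1080),
                   ("1440p", 2560, 1440),
                   ("'3K'", 2880, 1620),
                   ("4K", 3840, 2160)]:
    print(f"{name:>6}: {w * h / native_4k:5.0%} of native 4K pixels")
```

Even the '3K' example is only a bit over half the pixels of native 4K, which is why reconstruction up to full res looks so attractive.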
 
With these latest reprojection techniques they don't need to render natively at 4K. Like it has been said before, even a middle ground between 1080p and 4K ('3K'? No idea) would do, if it's then pumped up to full res like it's done today with 1080p. It won't look as good, but it's definitely doable.

If QB can render at 720p and look like that, there's hope.
Yes, strong engineering solutions could scale well in this scenario, but they're not applicable to every situation.
 