PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)

Yep, that'll give them easy BC for sure. My last hope now is FreeSync. If the game runs at higher res on PS4K but with FreeSync enabled by default on all cross-gen games, it could be interesting and a real framerate improvement over the PS4.

If the leaks are true obviously.

FreeSync would be nice. That would probably be the system seller for me personally.
 
Perhaps they are still deciding between 4, 6 or 8 Zen cores.
This is the most logical. Maybe 4 Zen cores are enough to replicate the PS4's 8 Jaguar cores, depending on the clock speed, but they may be hesitating over whether to give the CPU more juice. If they ask devs, devs will vote for a CPU upgrade first. It also depends on the Xbox 1.5 CPU; this could be exactly the same situation that forced Sony to go with 8 GB of GDDR5 at the last minute with the PS4.
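Rough way to frame the 4-Zen-vs-8-Jaguar question as back-of-envelope math; the clocks and the IPC ratio below are my own assumptions for illustration, not leaked figures:

# Very rough multithreaded-throughput model: cores * clock * relative IPC.
# All numbers are illustrative assumptions, not leaked specs.
jaguar_cores, jaguar_clock_ghz, jaguar_ipc = 8, 1.6, 1.0   # PS4 baseline, IPC normalised to 1
zen_cores, zen_clock_ghz, zen_ipc = 4, 3.0, 1.8            # hypothetical Zen config

jaguar_score = jaguar_cores * jaguar_clock_ghz * jaguar_ipc
zen_score = zen_cores * zen_clock_ghz * zen_ipc

print(f"Jaguar 8c @ 1.6 GHz: {jaguar_score:.1f}")
print(f"Zen 4c @ 3.0 GHz:    {zen_score:.1f} ({zen_score / jaguar_score:.2f}x vs Jaguar)")

On those made-up numbers 4 Zen cores clear the 8-Jaguar baseline comfortably, but console clocks and SMT scaling could move it either way, which is presumably the hesitation.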
 
Double the GPU would mean 14/16 nm, new CPU would mean Zen, meaning GF 14 nm. Don't think this will be ready by E3, perhaps not even by PSVR.

Cost per transistor is going to be fairly similar for mature 28nm and early 14nm I'd have thought. So that explains a cost bump. GDDR5X would mean they could stick with 256-bit bus, probably 8 or 16 GB.

Will likely coexist with PS4 for a couple of years, this isn't going to be a straight replacement.
I thought cost per transistor went up at 14nm due to FinFET.
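On the 256-bit bus point, the bandwidth math is simple enough to write down; the per-pin data rates are typical GDDR5/GDDR5X figures, nothing confirmed for this box:

# Peak bandwidth = bus width in bytes * per-pin data rate (Gbps).
bus_width_bits = 256
for label, gbps_per_pin in [("PS4 GDDR5 @ 5.5 Gbps", 5.5),
                            ("GDDR5 @ 7 Gbps", 7.0),
                            ("GDDR5X @ 10 Gbps", 10.0)]:
    bandwidth_gb_s = (bus_width_bits / 8) * gbps_per_pin
    print(f"{label}: {bandwidth_gb_s:.0f} GB/s")

So GDDR5X on the same 256-bit bus would land around 320 GB/s, close to double the stock PS4's 176 GB/s.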
 
EDIT: ugh, SLI according to leaker Zoetis. Not sounding good for 60fps gaming. One CPU, 2 GPUs, that's definitely for an upres only.
Perhaps it's just that guy's way of conveying the performance ballpark (2x), so not literal SLI I guess... What would on-chip SLI in an SoC even be, anyway?

I still think and hope this is a Raven Ridge APU with Zen, most likely in Q1 2017. The PS4 missed Steamroller due to launch timing; this time they don't have to rush it.
https://forum.beyond3d.com/threads/ps4k-speculation.57686/page-4#post-1902465

The GPU should be on par with a GeForce GTX 970, which would match an Oculus-ready PC.
https://forum.beyond3d.com/threads/...ible-console-spawn.51432/page-21#post-1900300

The 4K thing shouldn't be taken literally either; the PS3 was a 1080p console, after all.
The HDR output support is a nice addition; rendering is already HDR internally as it is.
 
Probably the biggest misconception in the history of VR internet topics.
Dual GPU is not better for anything. It's the most inefficient way of computing in recent memory.

Dual GPU is worse for everything, unless you are selling the GPUs.
It depends: a less efficient solution is better than an impossible one.

You can't make a 700mm2 GPU, and you can't make a 768bit GDDR5 interface either.

But you can make two 350mm2 GPUs with 384bit GDDR5 each.
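The die-size argument is mostly a yield and wafer-economics one. A crude Poisson yield model makes the point; the defect density below is a guess purely for illustration, real numbers are foundry-confidential:

from math import exp

# Crude Poisson yield model: yield = exp(-defect_density * die_area).
defect_density_per_mm2 = 0.002   # assumed illustrative value, not a real foundry figure
for area_mm2 in (700, 350):
    die_yield = exp(-defect_density_per_mm2 * area_mm2)
    print(f"{area_mm2} mm2 die: ~{die_yield * 100:.0f}% of dies good")

Under that model roughly twice the wafer area ends up as usable silicon with the smaller die, on top of sidestepping the reticle and 768-bit bus problems.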
 
Alright well here's my vague hopes/predictions:

14nm APU (4-8 Zen CPU cores + 27-36 Polaris compute units at 950MHz; rough TFLOPS math below)
12GB GDDR5 / 8GB GDDR5X memory (or possibly even the same 8GB GDDR5 if this system is just intended as a brute-force GPU upgrade for higher res/fps)
1TB hybrid hard drive

Cost: $399-$499

Release March 31st 2017
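Quick math on the GPU end of that wishlist, using the standard GCN peak-throughput formula with the CU counts and clock from the post above:

# GCN peak FP32 throughput = CUs * 64 ALUs * 2 ops per clock (FMA) * clock.
clock_ghz = 0.95
for cus in (27, 36):
    tflops = cus * 64 * 2 * clock_ghz / 1000
    print(f"{cus} CUs @ {clock_ghz} GHz: {tflops:.2f} TFLOPS")

So the top of that range is about 4.4 TFLOPS, roughly 2.4x the stock PS4's 1.84 TFLOPS.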
 
A 50% increase in clocks would have to be on a smaller node and would also likely require a proportional GDDR5 BW increase.

They might as well upgrade to GCN 1.2 at that point (delta colour compression), though I'm not sure how much changed in the low-level uarch. It's not a 100% replacement for pure BW, but I suppose it helps a bit.
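For scale, here's what a straight 50% clock bump on the stock PS4 GPU implies if bandwidth has to scale with it; the 800 MHz / 176 GB/s figures are the known PS4 specs, the proportional-scaling assumption is from the post above:

# Stock PS4 GPU clock and GDDR5 bandwidth, scaled by 1.5x.
base_clock_mhz, base_bw_gb_s = 800, 176
scale = 1.5
print(f"{base_clock_mhz * scale:.0f} MHz GPU would want roughly {base_bw_gb_s * scale:.0f} GB/s")

264 GB/s is at or beyond what a 256-bit GDDR5 bus comfortably delivers, which is where delta compression or GDDR5X would come into it.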
 
Probably the biggest misconception in the history of VR internet topics.
Dual GPU is not better for anything. It's the most inefficient way of computing in recent memory.

Dual GPU is worse for everything, unless you are selling the GPUs.

Not true. The main problem with multi-GPU is that developers have not been able to code for it explicitly, instead relying on the driver to make it work. This would not be the case on a console, and will soon not be the case on PCs either as developers start leveraging DX12's explicit multi-adapter. The other major problem is the performance penalty when a GPU has to access non-local memory, something that can be mitigated with a super-fast interconnect between the memory controllers on each chip.

Also, AMD are producing a dual-Fiji card for the specific purpose of being a tool for VR content creators. You would expect, then, that this card must be pretty good for VR if there is a belief that people making VR applications would choose that solution over the existing options.
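To put a rough number on the non-local memory penalty, here's a toy comparison of a local GDDR5 pool against two possible inter-GPU links; all three bandwidth figures are illustrative assumptions, not anything rumoured for this hardware:

# A GPU reading the other GPU's memory is capped by the inter-GPU link,
# not by its local GDDR5 bandwidth.
local_bw_gb_s = 176   # PS4-class local GDDR5 pool
links_gb_s = {
    "PCIe 3.0 x16-class link": 16,
    "hypothetical wide on-package interconnect": 100,
}
for name, link_bw in links_gb_s.items():
    print(f"{name}: remote reads ~{local_bw_gb_s / link_bw:.0f}x slower than local")

Which is the point about the interconnect: the penalty only becomes manageable if the two chips are tied together with something far fatter than a PC-style bus.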
 
It's true, I agree. This could mean the end of generations as we know them. In 3-4 years, another doubling of the GPU and a new CPU.
If we're only doubling GPU power every 4 years, then console generations as we know them are over regardless of whether there's a stopgap progression or not. That means a 4x GPU increase per 6-8 year generation, which isn't a generational advance (typically ~10x). Perhaps it's this slowdown of progression that's encouraging (in part at least) a progressive platform model? Get consumers used to the idea and upgrading as they want. This'd mean a better turnaround of hardware IMO. Games would just need a clear minimum version number.
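Spelling out the generational math in that post; the doubling-every-4-years rate is the scenario being discussed, and the ~10x-per-generation figure is the usual rule of thumb:

# If GPU power doubles every 4 years, total gain over a 6-8 year generation:
doubling_period_years = 4
for gen_length_years in (6, 7, 8):
    gain = 2 ** (gen_length_years / doubling_period_years)
    print(f"{gen_length_years}-year generation: ~{gain:.1f}x GPU power")

Roughly 3-4x per generation versus the historical ~10x, which is the argument for a rolling upgrade model.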
 
Wait, a physical card which has 2 GPU cores on it is fine.
But the whole "2 eyes so 2 GPUs is super efficient" idea needs to have a terminator sent back in time to kill that stupid argument's parents. And if that fails, kill the people who wrote it down on the internet.
 
They could do a similar thing to what Nintendo did with the Wii: overclock both the CPU and GPU, and maybe add some cores to the GPU in order to reach 3.6 TFLOPS. How much do you think the PS4 APU could be overclocked using 14nm tech?
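For reference, the clock the stock 18-CU PS4 GPU would need to hit 3.6 TFLOPS without adding any CUs; this is pure arithmetic and says nothing about whether a 14nm shrink could actually clock that design so high:

# Solve for clock in the GCN throughput formula: TFLOPS = CUs * 64 * 2 * clock.
target_tflops, cus = 3.6, 18   # stock PS4 CU count
required_clock_ghz = target_tflops * 1e12 / (cus * 64 * 2) / 1e9
print(f"Required clock with {cus} CUs: {required_clock_ghz * 1000:.0f} MHz (stock is 800 MHz)")

Nearly doubling the clock looks a lot less plausible than widening the GPU, which is presumably why the overclock-plus-more-CUs combination gets floated.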

EDIT: ugh, SLI according to leaker Zoetis. Not sounding good for 60fps gaming. One CPU, 2 GPUs, that's definitely for an upres only.


I think it's Starsha, the GNB, that's being upgraded while the main GPU stays the same.
 