PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)

Wait, a physical card which has 2 GPU cores on it is fine.
But the whole "2 eyes, so 2 GPUs is super efficient" line needs to have a Terminator sent back in time to kill that stupid argument's parents. And if that fails, kill the people who wrote it down on the internet.

Quoting from Anandtech's article on AMD's LiquidVR tech, regarding Affinity Multi-GPU:

Anandtech said:
Moving on, we have AMD’s compelling content goal, which is backed by their Affinity Multi-GPU technology. Short and to the point, Affinity Multi-GPU allows for each eye in a VR headset to be rendered in parallel by a GPU, as opposed to taking the traditional PC route of alternate frame rendering (AFR), which has the GPUs alternate on frames and in the process can introduce quite a bit of lag. Though multi-GPU setups are not absolutely necessary for VR, the performance requirements for high quality VR combined with the simplicity of this solution make it an easy way to improve performance (reduce latency) just by adding in a second GPU.

At a lower level, Affinity Multi-GPU also implements some rendering pipeline optimizations to get rid of some of the CPU overhead that would come from dispatching two jobs to render two frames. With each eye being nearly identical, it’s possible to cut down on some of this work by dispatching a single job and then using masking to hide from each eye what it can’t actually see.
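To make the AFR-versus-affinity difference concrete, here's a toy motion-to-photon model in Python. The 90 Hz frame time and the one-frame AFR queue are my own illustrative assumptions, not LiquidVR measurements:

Code:
# Toy latency comparison: alternate-frame rendering (AFR) vs per-eye
# "affinity" rendering on two GPUs. All numbers are illustrative.
FRAME_MS = 11.1  # time for one GPU to render one eye image at 90 Hz

# AFR: the GPUs alternate whole frames, and the driver queues a frame
# ahead to keep both busy, so input typically waits ~one extra frame.
afr_latency_ms = FRAME_MS + FRAME_MS

# Affinity: each GPU renders one eye of the *same* frame in parallel,
# so both eyes complete after a single frame time with no extra queue.
affinity_latency_ms = FRAME_MS

print(f"AFR:      ~{afr_latency_ms:.1f} ms")       # ~22.2 ms
print(f"Affinity: ~{affinity_latency_ms:.1f} ms")  # ~11.1 ms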
 
Alright, well, here's my vague hopes/predictions:

14 nm APU (4-8 Zen CPU cores + 27-36 Polaris compute units at 950 MHz)
12 GB GDDR5 / 8 GB GDDR5X memory (or possibly even the same 8 GB GDDR5, if this system is just intended as a brute-force GPU upgrade for higher res/fps)
1 TB hybrid hard drive

Cost: $399-$499

Release: March 31st, 2017
Just ahead of April 1st :)
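For scale, a quick back-of-envelope on those GPU numbers, using standard GCN arithmetic (64 ALUs per CU, 2 FLOPs per ALU per clock via FMA):

Code:
# FP32 throughput for the speculated configs above.
def gcn_tflops(cus, mhz):
    return cus * 64 * 2 * mhz * 1e6 / 1e12

for cus in (27, 36):
    print(f"{cus} CUs @ 950 MHz -> {gcn_tflops(cus, 950):.2f} TFLOPS")
print(f"Stock PS4 (18 CUs @ 800 MHz) -> {gcn_tflops(18, 800):.2f} TFLOPS")

That works out to roughly 3.28-4.38 TFLOPS, i.e. about 1.8x-2.4x the stock PS4's ~1.84 TFLOPS.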
 
I thought cost per transistor went up at 14 nm due to FinFET.

Different things have been said over recent years, but this slide from last year (posted by Pixel a few pages back) paints a picture of a 20 nm node that scaled ... okay ... from 28 nm in terms of area, but cost more per transistor. For 14 nm (which is GF rather than TSMC), area scaling is only very slight from 20 nm, but cost per transistor is again dropping though nothing like as significantly as it has in the past. At least power is dropping again thanks to Finfet.

Tallies with other comments I've read that say 28 -> 14 nm (~2 nodes) has finally given the kind of area and power reductions you'd historically have associated with a single node shrink.

(I'm assuming the cost relates to time of introduction, as cost on the same node also drops over time, and cost around introduction would seem to be the most relevant thing for AMD to be talking about last year, as they were developing Zen.)


[Slide: area scaling and cost per transistor across the 28 nm, 20 nm, and 14 nm nodes]
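The mechanism behind that slide is easy to sketch: cost per transistor only falls when the density gain outpaces the wafer-cost increase. The relative numbers below are made up purely to illustrate that shape, not foundry figures:

Code:
# Relative cost per transistor = relative wafer cost / relative density.
# All inputs below are illustrative assumptions, not real pricing data.
nodes = {
    # node: (wafer cost vs 28 nm, transistor density vs 28 nm)
    "28 nm": (1.0, 1.0),
    "20 nm": (2.0, 1.9),   # decent density gain, but pricey wafers
    "14 nm": (2.0, 2.05),  # only slightly denser than 20 nm (GF)
}
for node, (cost, density) in nodes.items():
    print(f"{node}: relative cost/transistor = {cost / density:.2f}")
# 28 nm: 1.00, 20 nm: ~1.05 (worse!), 14 nm: ~0.98 (barely better)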
 
It would not look good if they offered less than 500 gigs. Games are getting larger and larger.


True dat. 500 GB was plenty of space on X360, but I honestly feel constrained by 500 GB on XBO. And I'm not a voracious gamer by any stretch (other than a ton of Destiny).

It's really only enough for 5-6 games installed at one time, which in practice is livable but far from ideal.
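A rough sanity check on that 5-6 figure, using assumed sizes (the system reserve and per-game footprints obviously vary by title):

Code:
# Rough install-count estimate for a 500 GB console drive.
DRIVE_GB = 500
SYSTEM_RESERVE_GB = 90  # assumed OS partition, caches, capture space
AVG_INSTALL_GB = 60     # assumed big-AAA install size this gen

usable_gb = DRIVE_GB - SYSTEM_RESERVE_GB
print(f"{usable_gb} GB usable -> ~{usable_gb // AVG_INSTALL_GB} installs"
      f" at {AVG_INSTALL_GB} GB each")  # ~6 games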
 
If the latest GAF PS4K rumor says Q1 2017 rather than the previous "before PSVR" (although I'm dubious about that post in general)... I dunno, I'm getting that delay-riffic feeling that always strikes video games. More likely than not it will slip further. Q3 2017?

Since Q4 2018 will be five years from the start of this gen, maybe it would be best to just wait a bit and start a new gen (for both Sony and MS)?

Or maybe there will be no new gens. Just incremental upgrades forever, I guess.
 
If the latest GAF PS4K rumor says Q1 2017 rather than the previous "before PSVR" (although I'm dubious about that post in general)... I dunno, I'm getting that delay-riffic feeling that always strikes video games. More likely than not it will slip further. Q3 2017?
Which rumours are you talking about? The two dates I've seen were a full announcement before October and actual release in Q1 2017. These are obviously different things.
 
Which rumours are you talking about? The two dates I've seen were a full announcement before October and actual release in Q1 2017. These are obviously different things.

Yeah, I think it was stated that it was supposed to be ANNOUNCED before the PSVR release, and supposedly released either Holiday 2016 or Q1 2017.
 
Since Q4 2018 will be five years from the start of this gen, maybe it would be best to just wait a bit and start a new gen (for both Sony and MS)?

Or maybe there will be no new gens. Just incremental upgrades forever, I guess.
See this post. New gens will not make sense once BC becomes an ecosystem feature.
 
What if the 2nd GPU/ALUs were PowerVR or Mali? That makes more sense to me. The DPU is upgraded for 4K rendering.
 
This'd mean a better turnaround of hardware IMO. Games would just need a clear minimum version number.

This would also imply a greater need to profit from the hardware in the early years of sale, potentially meaning less hardware for your money than console gamers are traditionally used to.
 
See this post. New gens will not make sense once BC becomes an ecosystem feature.

Only if gamers' expectation is to play games with the exact same graphics (read: lighting effects, texture res, shadow/lighting quality, etc.) as the current gen going forward, but with marginal increases in framerate and resolution... i.e. nobody expects that at all (see the constant whining about a lack of "next-gen gameplay" in games early this gen).

Console generations are about more than just increases in framerate and resolution.

If consoles become like mobile HW/PCs, with developers having to support a large swathe of HW configurations in order to make money, then the console gaming industry will crash... HARD!!! This is because shoddy 4-8 year old tech will always be the developer baseline: the lowest common denominator that gets targeted. So there'll in fact be even less of a reason for consumers to upgrade with more frequent HW revisions, because the games will always be held back by legacy HW.

I can't understand how anyone would want this...
 
What if the 2nd GPU/ALUs were PowerVR or Mali? That makes more sense to me. The DPU is upgraded for 4K rendering.
Adding a second GPU of a completely different architecture? That's bonkers. It'd mean two rendering paths and a whole lot of work for devs to target PS4K, and zero benefit for PS4 games running on the system that'd be limited to the basic Liverpool GPU.
 
Only if gamers' expectation is to play games with the exact same graphics (read: lighting effects, texture res, shadow/lighting quality, etc.) as the current gen going forward, but with marginal increases in framerate and resolution... i.e. nobody expects that at all (see the constant whining about a lack of "next-gen" games early this gen).
At launch, yes. But with the new hardware releasing sooner, it's not such a problem. When iteration three comes out, some games will already be targeting iteration 2 as a minimum at 30 fps. These games will then run on I3 in all their 60 fps glory. Come I4, some games will be targeting I3 as a minimum as they are too demanding to run on I2, and these will run at 60 fps on I4. So the cycle continues, with gamers having the option to transition when they feel their console no longer offers the experience they want. Developers will have the choice to target the widest audience and lowest level hardware with the most competition, or focus on the latest hardware as a flagship title and be a bigger fish in a smaller pond. Just like PC devs choosing to target integrated graphics as a minimum or a more powerful baseline and exclude all those laptops to sell to the core PC gamer.

Console generations are about more than just increases in framerate and resolution.
IMO everyone needs to stop thinking like this. Just because consoles had to work a certain way in the past doesn't mean that's the best way, nor the way people want. Console generations were literally the only choice, with a clean-slate design, developer hell during the transition, and weak software for the first 12-18 months. Console iterations via hardware (so no fat API sucking up resources, and still the option of a decent lean OS and user-friendly experience) mean improved gaming, soft transitions, and an extensive, ever-growing library. The only people who need worry are PC gamers, who'd lose a fair bit of their unique advantages. :p
 
At launch, yes. But with the new hardware releasing sooner, it's not such a problem. When iteration three comes out, some games will already be targeting iteration 2 as a minimum at 30 fps. These games will then run on I3 in all their 60 fps glory. Come I4, some games will be targeting I3 as a minimum as they are too demanding to run on I2, and these will run at 60 fps on I4. So the cycle continues, with gamers having the option to transition when they feel their console no longer offers the experience they want. Developers will have the choice to target the widest audience and lowest level hardware with the most competition, or focus on the latest hardware as a flagship title and be a bigger fish in a smaller pond. Just like PC devs choosing to target integrated graphics as a minimum or a more powerful baseline and exclude all those laptops to sell to the core PC gamer.

Well, then it really depends on how short the duration actually is between each HW iteration, as well as the performance jump between each one. If it's 4-5 years per iteration, with a 4-8x increase in performance, then sure, I can see the scenario you painted panning out a bit more, as it won't be too far removed from what we currently have.

But if it's only 2-3 years between each iteration, with a measly 2x jump in performance between each one, then devs will absolutely need to target more than just the "current iteration - 1" if they want to make any money. In 2-3 years the newer HW will not have been able to garner enough of an installed base to even be considered by devs, and this will be made worse by the fact that the meagre performance increase will offer even less incentive for gamers to upgrade.
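To put numbers on the scheme being debated: the "target I2 at 30 fps, get 60 fps on I3" logic only works if each step delivers its full multiplier and the game is purely GPU-bound, both of which are optimistic assumptions. A minimal sketch:

Code:
# Frame rate of a title targeting iteration 2 at 30 fps when run on
# later iterations, assuming a fixed per-step GPU multiplier and a
# purely GPU-bound workload (both optimistic assumptions).
def fps_on(iteration, target_iter, step, base_fps=30):
    return base_fps * step ** (iteration - target_iter)

for step in (2.0, 1.5):
    fps = [fps_on(i, 2, step) for i in (2, 3, 4)]
    print(f"{step}x steps: I2/I3/I4 -> "
          + ", ".join(f"{f:.0f} fps" for f in fps))
# 2.0x steps: 30, 60, 120 fps; 1.5x steps: 30, 45, ~68 fps -- smaller
# steps never carry an I2-targeted title cleanly from 30 to 60 fps.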

IMO everyone needs to stop thinking like this. Just because consoles had to work a certain way in the past doesn't mean that's the best way, nor the way people want. Console generations were literally the only choice, with a clean-slate design, developer hell during the transition, and weak software for the first 12-18 months. Console iterations via hardware (so no fat API sucking up resources, and still the option of a decent lean OS and user-friendly experience) mean improved gaming, soft transitions, and an extensive, ever-growing library. The only people who need worry are PC gamers, who'd lose a fair bit of their unique advantages. :p

Console generations existed as a business model that was an alternative to PC gaming. There was always a choice, and I'd argue that the mere fact of the console industry's success shows the benefit of that model over the situation we have on PC.

I think you're approaching this as someone with a very different mindset to most typical console gamers. Most console gamers can't even perceive differences in framerate and resolution unless they are told about it. So now you think those same consumers would still be willing to upgrade their HW every 2-4 years for an upgrade that amounts to merely an imperceptible (according to them) increase in performance? Why should they?

In my eyes, console generations have never been about offering consumers better computing technology merely because it's available. They've been more about facilitating the progression of the software technology used in the creation of games, allowing for the realisation of better, more complex, and more believable game worlds. Granted, the technological progression of HW is slowing down; that is an inescapable fact. But I don't see more frequent console HW updates as a solution to that. In fact I think more frequent updates will only exacerbate the problem in consumers' eyes, as the perceived slowdown in software technology (e.g. improving rendering techniques, etc.) will appear disproportionately slower than the actual slowdown in HW technology, due to the business realities of software development. Essentially, you're suddenly asking gamers to upgrade more frequently, for much less of a HW improvement, only to see that improvement taken advantage of in real games 2-4 years after they made the purchase... That's not the most appealing value proposition to offer gamers, is it?
 
I could imagine a situation where this happens for both Sony and Microsoft, because both discovered it would be cheaper to commission new 14 nm APU tech from AMD than to have their current architectures shrunk...
Indeed, that's why I'm not really biting on the idea of a significant upgrade at all. It will be a while before AMD can deliver that, and the resulting system is likely to have little in common with the PS4 and the XB1, especially on the CPU side of performance.
Sony has a system that should be relatively easy to map to newer IP (still 28 nm) with minimal software pain. In the process they could reduce costs and deliver a device that is a lot more solid as a media hub.
When they designed the PS4 they went further than what seems to be the right balance relative to the CPU side: they invested some extra silicon in additional ACEs (I don't expect much of an impact, yet it complicates the design), etc. They may have been hoping for the current IP to make it to 20 nm and so have at least one shrink of the system ahead of them; now they are left with a redesign to provide newer functionality and possibly reduce costs (something trickier to do for MSFT's more exotic hardware). It would not be the first time some hardware sees a downgrade of some form (the Vita screen is an example).
The PS4 SoC is quite big at 348 mm², with lots of not-so-cheap memory; Sony may not want to follow MSFT's loss-leading approach with the XB1, or may want to make some money on the hardware.
Leaving the new media-playing capability and comfort of use (extra CPU power) aside, the gains could be sound: think of moving from Pitcairn to Bonaire (212 mm² to 160 mm² **). Performance-wise that gives an idea, and likewise for power. Either way, there is a hardware.fr review that also puts both cards really close.
I do believe an even fairer comparison could be made with access to newer IP: AMD has not upgraded its (gaming) entry level for so long that we could well be surprised by how well a lightweight GCN 1.2 GPU performs and clocks overall, and by its matching power and thermal characteristics.

** For reference, a quarter of Fiji is 150 mm².
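For the record, the area ratios in that comparison (public die sizes; actual per-die cost scales worse than linearly with area once yield is factored in):

Code:
# Die-area ratios for the Pitcairn / Bonaire / quarter-Fiji comparison.
pitcairn_mm2, bonaire_mm2, fiji_quarter_mm2 = 212, 160, 150

print(f"Bonaire vs Pitcairn:      {bonaire_mm2 / pitcairn_mm2:.0%}")      # ~75%
print(f"Quarter-Fiji vs Bonaire:  {fiji_quarter_mm2 / bonaire_mm2:.0%}")  # ~94%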
 