PS4 Pro Speculation (PS4K NEO Kaio-Ken-Kutaragi-Kaz Neo-san)

Add a bus to the memory controller to let it talk to the memory controller in the other APU.
I suspect actually implementing that in hardware would be a little more complicated. At minimum, you'd have to expand bus snooping across both chips, and that would mean changes in the cores themselves, which would need verification and so on. Maybe more work than it's worth, considering how slow Jaguar is? :p
 
14/16nm is 20nm; it had a new marketing name slapped on it, shrank only barely, and got FinFET ("3D") transistors added on top. It wouldn't surprise me if most or all of the 20nm manufacturing facilities have been, or will be, converted to 14/16nm as soon as manufacturing contracts run out.

Interesting. What is your guess on what "10nm" FinFET nodes might actually be in all practical terms -- 20nm again, 16nm, 14nm, or a true 10nm?
 
Wouldn't two APUs sharing a single memory pool on a single bus end up bandwidth starved when they are both trying to access data from main memory at the same time?

On PS4 contention between the GPU and CPU already causes BW for each to drop disproportionately. I can't think how that will work for two entire APUs.
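
To put rough numbers on that worry, here's a back-of-the-envelope sketch. Only the 176 GB/s peak is the PS4's real spec; the contention penalty is an assumed, illustrative figure:

```c
#include <stdio.h>

/* Back-of-the-envelope model of two clients sharing one GDDR5 bus.
 * Only the 176 GB/s peak is the PS4's real figure; the efficiency
 * lost to contention is an assumed, illustrative number. */
int main(void)
{
    const double peak_gbs = 176.0;        /* PS4 GDDR5 peak bandwidth */
    const double contention_loss = 0.20;  /* assumed: 20% lost to bank
                                             conflicts and bus turnaround
                                             when two APUs interleave */

    double usable = peak_gbs * (1.0 - contention_loss);
    printf("Usable after contention: ~%.0f GB/s\n", usable);
    printf("Fair split per APU:      ~%.0f GB/s\n", usable / 2.0);
    return 0;
}
```

Even with a generous fair split, each APU would see well under half the nominal peak, which is the starvation scenario being described.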

Not a single bus. You'd have RAM connected to the memory controller on each APU. If one APU has to access memory that was physically connected to the other, it would do so via a bus connecting its memory controller to that of the other chip. To the system software, though, it's all just one pool of memory. Again, old tech that AMD have had since 2001. I'm not aware of an existing bus implementation of this nature that could handle the bandwidth of the PS4's GDDR5, though, so one would have to be custom made for this application.

To be clear, I don't consider any of this likely. Unlike some, though, I consider it plausible. And if it were the solution, I think it's a lot more interesting of one than the types of solutions most expect.
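
To make the "one pool to software" idea concrete, here's a minimal sketch of the kind of address routing such a design implies. Everything below is hypothetical illustration, not any leaked design:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch of NUMA-style address routing between two APUs,
 * each owning half of an 8 GB pool. Software sees one flat address
 * space; the hardware decides local vs. remote per access. */

#define POOL_BYTES  (8ULL << 30)      /* 8 GB unified pool */
#define NODE_BYTES  (POOL_BYTES / 2)  /* 4 GB attached to each APU */

typedef enum { APU0 = 0, APU1 = 1 } node_t;

/* Which APU's memory controller owns this physical address? */
static node_t home_node(uint64_t phys_addr)
{
    return (phys_addr < NODE_BYTES) ? APU0 : APU1;
}

/* True when an access from `requester` must cross the inter-APU
 * link (the custom bus speculated above) instead of going straight
 * to the local controller. */
static bool is_remote_access(node_t requester, uint64_t phys_addr)
{
    return home_node(phys_addr) != requester;
}
```

The point is that software just uses a flat address; only the hardware knows whether a given access has to cross the inter-APU link.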
 
I suspect actually implementing that in hardware would be a little more complicated. At minimum, you'd have to expand bus snooping across both chips, and that would mean changes in the cores themselves, which would need verification and so on. Maybe more work than it's worth, considering how slow Jaguar is? :p

Maybe. Maintaining cache coherency across 4 separate processors is certainly a challenge. Not an unsolvable one, though.
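
For reference, these are the five MOESI states that would have to be kept consistent across all four clusters. The transition below covers one simplified case (a remote read probe hitting our copy), not AMD's actual logic:

```c
/* Minimal MOESI sketch: the five stable states a cache line can be in,
 * plus one simplified transition. A real implementation also handles
 * writes, evictions, and races between in-flight probes. */
typedef enum {
    MODIFIED,   /* dirty, exclusive to this cache */
    OWNED,      /* dirty, but shared copies exist */
    EXCLUSIVE,  /* clean, exclusive to this cache */
    SHARED,     /* clean, other copies may exist  */
    INVALID     /* not present                    */
} moesi_state_t;

/* What happens to OUR copy when ANOTHER core's read probe hits it.
 * With two APUs, this probe may have to travel over the inter-chip
 * link, which is where the extra latency and design work comes in. */
static moesi_state_t on_remote_read_probe(moesi_state_t s)
{
    switch (s) {
    case MODIFIED:  return OWNED;  /* supply data, keep dirty ownership */
    case EXCLUSIVE: return SHARED; /* someone else now has a copy */
    default:        return s;      /* OWNED/SHARED/INVALID unchanged */
    }
}
```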
 
Dual APUs are preposterous. It makes far more sense to use the existing APU combined with a discrete GPU of equivalent power to the PS4's current GPU for SLI-style cooperation. Or, if you're making a whole new APU anyway, just make that as powerful as you want. Using more than one APU is just wasteful redundancy.
 
Interesting. What is your guess on what "10nm" FinFET nodes might actually be in all practical terms -- 20nm again, 16nm, 14nm, or a true 10nm?
I'm not a silicon process engineer myself (although we do have at least one of those on this board :)), but given the challenges involved with 10nm - and the delays! - we can be pretty sure it's a genuine shrink. The problems are substantial enough to cause Intel to abandon its tick-tock model - Intel of all companies! :)

If they expect a full year of delays, how long will it take TSMC and Samsung/GloFo to catch up? Consider we haven't even seen new chips arrive yet! We could be stuck at 16/14 for another four years, potentially... :runaway:
 
Interesting. What is your guess on what "10nm" FinFET nodes might actually be in all practical terms -- 20nm again, 16nm, 14nm, or a true 10nm?
For some time now and for the foreseeable future, these nm node names are indicative of major lithographic changes rather than the linear measure of anything in particular. That is true of Intel as well, even though they barely operate as a contract foundry, and thus don't really need to do this for competitive reasons.
This is an example of a short accessible article on the subject.
The upcoming 10nm node from TSMC will see updates on all major aspects, and would traditionally be seen as a full node shrink, whereas their 7nm node will seemingly leave certain aspects unchanged while focussing on transitioning other critical ones.
It bears mentioning that what defines a lithographic process is way more complex than the two or three numbers that constitute the depth of most forum comparisons between processes. Also, on any given process, specific implementation details tailor the properties of the final device, for instance giving priority to density or speed.
 
Damn. A console smaller than PS4 with 3.6TFlops+ would be enticing...

I'd rather have same size but quieter!

I imagine PS4K can be a more controlled, modest launch. The PS4 is selling nicely right now, and Sony needs retailers' full cooperation, and to educate them, to market a new, pricier SKU the way it wants alongside PSVR, etc. So they may try new things, sharing more info with more people.

But I think that NeoGAF guy is not a retailer, and retailers don't have to know its specs anyway.

I think this too... again, much like the iPhone, where two versions co-exist at any one time, giving buyers an entry level and a premium level. What would be nice is if buying a game got you 'both versions'. I suspect the $499 being spoken about is the target.
 
How loud do your PS4s get? I mean I notice when mine turns up the fan every so often but with the game/video sounds I can hardly ever hear it.
 
Mine's a launch unit and I don't hear the fan on it. Only the disc drive when it spins up to install a game to the HDD. I often think many of these fan complaints are people mistaking the drive's spinning noise for the fan.
 
Open it up and clean out the dust. Did this a few weeks ago for a friend and it's back to running very quiet.

Yeah, I hoover the vents frequently, which seems to help, but I know there will be dust inside... I've heard, though, that after a while it goes back to square one. I just bought one of the quieter fans; I figure if I'm going to open up my old launch unit and clean it out, I may as well go the whole hog and ensure it's got the better fan and fresh paste.
 
Mine can get really loud on some games. But now my PS4 is located in another room, so actually I can barely hear it.
 
Not a single bus. You'd have RAM connected to the memory controller on each APU. If one APU has to access memory that was physically connected to the other, it would do so via a bus connecting its memory controller to that of the other chip.
AMD's existing MOESI schemes rely on the memory controller being the final arbiter of snoop requests; the controllers themselves do not snoop.
http://www.realworldtech.com/qpi-evolved/3/

Putting a bus between memory controllers would be on the wrong side of the cache/memory divide unless the idea is that one APU's controllers are slaved to the other, up to the point that not even the cores local to the slave controllers can directly access them without going through the other chip. There's currently no infrastructure to communicate with another SoC, and even if it were added it would need to differ from AMD's existing architecture that uses the controllers as their home agent.
Slaving one APU's controllers to the other would wind up creating a link between controllers and require an HT interconnect to transfer snoops.

An alternate formulation of AMD's memory architecture, one that gives the memory controllers the intelligence to snoop each other, is a large enough endeavor on its own, and it's not clear how much the cache subsystem that underpins Jaguar would need to change. The non-trivial nature of this is indicated by how long AMD has gone without touching this distributed system.

It would seem less disruptive to create some kind of hybrid Xfire solution, similar to AMD's laptop APUs having a matching discrete GPU. It would then require explicit management by software and would add further division in the unified memory model. Compute that relies on Onion+ might not be portable or useful, although this would explain why legacy games could not use the extra processing power or bandwidth without a patch. It would be difficult to not get some kind of unwitting benefit from a stronger GPU or CPU block.
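
To illustrate what "explicit management by software" would mean in practice, here's a hypothetical AFR-style sketch. Every type and function in it is invented for illustration, not any real console API:

```c
/* Hypothetical sketch of explicit dual-GPU (AFR-style) frame dispatch.
 * All names here are invented for illustration; the point is that the
 * split is visible to the game, unlike a single bigger GPU. */
typedef struct gpu gpu_t;                      /* opaque handle per GPU */

extern void submit_frame(gpu_t *g, int frame); /* assumed engine hook */

static void render_loop(gpu_t *apu_gpu, gpu_t *discrete_gpu, int frames)
{
    for (int frame = 0; frame < frames; ++frame) {
        /* Alternate frames between the two GPUs. A legacy title that
         * never calls this path only ever uses apu_gpu, which matches
         * the idea that old games would need a patch to see a benefit. */
        gpu_t *target = (frame & 1) ? discrete_gpu : apu_gpu;
        submit_frame(target, frame);
    }
}
```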

I am curious where the chatter is about a PS4 slim model. Unless Sony thinks the PS4's success has sopped up that much demand already, a slim model would be the part of the life cycle where more general buyers, the ones who balk at early-adopter pricing or the bulkier and hotter initial implementations, buy in. Not putting that out there is taking the foot off the gas in that broader market.
Also unclear, given the speculated significant graphical upgrade, is why increasing performance allows for a smaller console when the power benefit has been eaten up, unless the thing stays as loud or louder. I suppose if a PS4K is using a hybrid Xfire solution, it could be a PS4 Slim + GPU rather than designing two different APUs or forgetting about slimming the console down.
 
@3dilettante

Thanks for the insight. All of this started as a way to reconcile *all* of the leaks as opposed to arbitrarily choosing which ones I wanted to believe, just to see if it was possible. This system using "SLI" was one of those leaks (Nvidia branding > AMD branding, apparently). The 2X APU was specifically a reaction to the alleged uncertainty regarding CPU performance. As Mchuj rightly questioned, how can this still be in flux this "late"? So, I thought, "What if they are contemplating whether to add a 2nd full APU instead of just a discrete GPU? That decision could be made fairly late." Well, that logic seems pretty flawed now. Since that was the whole premise behind suspecting a dual APU system, I think I'm ready to toss that concept now.

I still consider a discrete GPU plausible, though, as unlikely as it is.
 
Perhaps all they've done is drop it to the 14nm process node, enable the two disabled CUs on the GPU, and upclock the hell out of the whole thing?
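
The arithmetic on that is easy to sanity-check (64 ALUs per CU and 2 FLOPs per ALU per clock are standard GCN figures; the 3.6 TFLOPS target is the rumored number from earlier in the thread):

```c
#include <stdio.h>

/* Quick GCN FLOPS arithmetic: what clock would 20 CUs need to reach
 * the rumored ~3.6 TFLOPS? Standard GCN figures: 64 ALUs per CU,
 * 2 FLOPs (one FMA) per ALU per clock. */
int main(void)
{
    const int    cus           = 20;   /* 18 active + 2 re-enabled */
    const int    alus_per_cu   = 64;
    const int    flops_per_clk = 2;
    const double target_tflops = 3.6;

    double flops_per_cycle = (double)cus * alus_per_cu * flops_per_clk;
    double clock_ghz = target_tflops * 1e12 / flops_per_cycle / 1e9;

    printf("Required clock: %.2f GHz (vs. the PS4's 0.8 GHz)\n", clock_ghz);
    return 0;
}
```

That works out to roughly 1.41 GHz, about 75% above the PS4's 800 MHz, which gives a sense of how hard the upclock would have to work if the CU count barely moves.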
 