PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Are they split in some way? I just think that past 14 CUs the CPU or something else in the system is starting to be a bottleneck and you get overall better performance/balance by dedicating a few CUs to help/boost the CPU.

I think we reeeeally need official specs. I get the feeling that people here and at GAF have spent the past few weeks building theories on a mixture of pure speculation and misinterpreted info. :D
 
Could it be those 4 have access to special cache(s) or other such hw that assists with compute functions?

Or could it be that it's more like 4+14 instead, and the 4 act like a mini-GPU for power efficiency?
 
Could it be those 4 have access to special cache(s) or other such hw that assists with compute functions?

Or could it be that it's more like 4+14 instead, and the 4 act like a mini-GPU for power efficiency?

How would that be more power efficient? There is nothing in that incredibly primitive block diagram that indicates the GPU is split in any way. But it also doesn't show the audio processor or anything off the south bridge, so that could mean nothing.
 
Are they split in some way? I just think that past 14 CUs the CPU or something else in the system is starting to be a bottleneck and you get overall better performance/balance by dedicating a few CUs to help/boost the CPU.

This gives Kaz a reason to tell us in 2015 that the PS4's power is still untamed :)
It's a questionable design, I think. Why do you need an 8-core CPU at 1.6 GHz plus 4 reserved AMD CUs? Is it cheaper than a good 4-core 3.2 GHz CPU from AMD's FX series?
 
This 14 + 4 CU thing is very strange. Why not just leave it as 18 CUs and leave it to the developers' discretion as to what to use them for?

I hope the 4 CUs weren't purposely nerfed, leaving developers no choice.
 
One random thing I can think of is the possibility that at startup, the system generates its own shader(s) that are assigned to four CUs before other programs are able to initialize their graphics routines.
This would take up instruction issue, load/store bandwidth, registers, and LDS space that can impact what else can run on those CUs.

What they're there for, and how that can be managed, would require more details to narrow down the possibilities.
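
To put rough numbers on how much a resident shader could eat into a CU: below is a crude occupancy sketch in Python, using standard GCN per-CU limits (64 KB of LDS, four SIMDs with 64 KB of vector registers each, ten wavefronts per SIMD). The reserved shader's resource usage is a made-up placeholder, not anything Sony has disclosed.

[code]
# Crude GCN occupancy sketch: how a resident "system" shader eats into one CU.
# The per-CU limits are standard GCN figures; the reserved shader's
# footprint below is a hypothetical placeholder.

LDS_PER_CU         = 64 * 1024   # bytes of local data share per CU
VGPR_PER_SIMD      = 64 * 1024   # bytes of vector registers per SIMD
SIMDS_PER_CU       = 4
MAX_WAVES_PER_SIMD = 10          # GCN cap on resident wavefronts per SIMD

def waves_left(reserved_lds, reserved_vgpr_per_simd, reserved_waves_per_simd):
    """Rough estimate of wavefront slots left for game shaders on one CU."""
    lds_frac  = 1 - reserved_lds / LDS_PER_CU
    vgpr_frac = 1 - reserved_vgpr_per_simd / VGPR_PER_SIMD
    slot_frac = 1 - reserved_waves_per_simd / MAX_WAVES_PER_SIMD
    # Occupancy is bounded by whichever resource is most depleted.
    return int(MAX_WAVES_PER_SIMD * SIMDS_PER_CU *
               min(lds_frac, vgpr_frac, slot_frac))

# Placeholder: a hypothetical resident shader using 16 KB of LDS, a quarter
# of each SIMD's registers, and two wave slots per SIMD.
print(waves_left(16 * 1024, 16 * 1024, 2))  # -> 30 of 40 wavefront slots remain
[/code]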
 
I am disappointed not because of the TFLOPS, compute units, etc., but because I hoped for at least some really special engineering. A downclocked off-the-shelf GPU, Jaguar cores, and 4 GB of RAM is probably the easiest and cheapest way of designing a console. Is AMD really that far behind, and simply can't show anything special to battle Intel/Nvidia?

Since Sony does not have a record of revolutionary new ways to alter the gaming industry, I at least hope the price will be accordingly low and that the cheap hardware benefits the customers and not Sony's pocket.
 
AMD looks to be in both Microsoft's and Sony's consoles. If so, it didn't need to do more to win this particular fight, because neither Nvidia nor Intel is there.

Whatever other special sauce is desired would be up to the money and time allotted by the customer, and an analysis of whether playing cute with the architecture is worth the effort.

(edit: I said "special sauce" without sufficient irony, I should be flogged.)
 
I don't think it's a matter of AMD being behind. It's a matter of MS and Sony not being willing to front huge money to pay for bespoke electronics. They paid for off-the-shelf plus mods, and that's what they got.
 
Those 4 CUs might have been optimized for vector physics compute work, hence the "minor" boost when used for rendering.
 
I don't think it's a matter of AMD being behind. It's a matter of MS and Sony not being willing to front huge money to pay for bespoke electronics. They paid for off-the-shelf plus mods, and that's what they got.

I don't even see the "mods" or any other gaming-relevant additions, except stripping the GPU of its PC shackles and putting everything in an APU.
 
Maybe these are just "reserved" for a specific purpose, like Sony's VR or motion controls, which they (also) intend to be omnipresent and which therefore must have a guaranteed pool of resources available at all times.
 
So you'd have 1.433 TFLOPS for rendering and 0.410 TFLOPS (410 GFLOPS according to the update) for GPGPU.

That's what I think, and it makes the comments from llerre and other people in the know clearer now: they have been saying it's a wash between both consoles, which I initially couldn't reconcile with the difference in flops. Almost everything in both is similar, including the audio, video, and zlib compression blocks and the CPU. Now it seems the GPUs are closer than we thought, given that Durango might also have dedicated compute units. The only real difference is their memory architecture.
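
For what it's worth, the split in those figures falls straight out of GCN arithmetic, assuming the rumoured 800 MHz clock: each CU has 64 ALUs doing one FMA (two flops) per cycle. A quick check in Python:

[code]
# Where the 1.433 TFLOPS / 410 GFLOPS split comes from, assuming the
# rumoured 800 MHz clock: 64 ALUs per CU, one FMA (2 flops) per cycle.
CLOCK_HZ     = 800e6
FLOPS_PER_CU = 64 * 2 * CLOCK_HZ            # 102.4 GFLOPS per CU

print(14 * FLOPS_PER_CU / 1e12)  # 1.4336 TFLOPS for rendering
print(4  * FLOPS_PER_CU / 1e9)   # 409.6 GFLOPS for compute
print(18 * FLOPS_PER_CU / 1e12)  # 1.8432 TFLOPS total
[/code]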
 
I would guess the most interesting things are possible with 'LibGCM' and low-latency ops between the CPU and GPU.
 
The designers could just be saying "don't count on the last 4 CUs to really be available all the time", or devs are getting the ability to tell their shaders which CU they'll be allocated to, or which ones to avoid.

The ability to set which CU a shader goes to is something that has not come up as an option on the PC.
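
If that ability does exist, the obvious shape for it would be a per-dispatch CU mask, one bit per CU. To be clear, this is pure guesswork on my part; the Python below just illustrates the idea, and nothing about the bit layout or any real API is documented:

[code]
# Hypothetical sketch of a per-dispatch CU mask: one bit per CU telling the
# GPU which CUs a shader may occupy. The layout is an assumption, not a
# documented interface.

TOTAL_CUS = 18

def cu_mask(allowed):
    """Build a bitmask from an iterable of CU indices a shader may use."""
    mask = 0
    for cu in allowed:
        assert 0 <= cu < TOTAL_CUS
        mask |= 1 << cu
    return mask

render_mask  = cu_mask(range(0, 14))   # graphics pinned to the first 14 CUs
compute_mask = cu_mask(range(14, 18))  # compute jobs steered to the last 4

print(f"{render_mask:018b}")   # 000011111111111111
print(f"{compute_mask:018b}")  # 111100000000000000
[/code]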
 
Those 4 CUs might have been optimized for vector physics compute work, hence the "minor" boost when used for rendering.
I'm wondering that. It's the only choice that makes sense to me: optimise the CUs for GPGPU work rather than graphics. Don't know what sort of optimisations those would be, though.
 