PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Also, don't these Jaguar cores have AVX support? So they're pretty wide (8 floats); it might be easier to just use AVX for vector-heavy calcs. You'll enjoy more coherent memory bandwidth and you have a lot more L2 space vs the GPU.

True, but you still only have ~100 GFLOPS to play with on the Jaguars vs up to 18x that on the GPU, so the incentive is pretty clear!
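For reference, both figures fall out of a quick peak-rate calculation. A rough sketch below, assuming the commonly rumoured configs (8 Jaguar cores at 1.6 GHz issuing 8 single-precision FLOPs per cycle, 18 GCN CUs at 800 MHz with 64 lanes doing an FMA per cycle), not confirmed specs:

```python
# Rough peak-FLOPS comparison; clock and core/CU counts are the
# commonly rumoured figures, not confirmed hardware specs.

def peak_gflops(units, clock_ghz, flops_per_unit_per_cycle):
    """Peak single-precision GFLOPS = units * clock * FLOPs/cycle."""
    return units * clock_ghz * flops_per_unit_per_cycle

# 8 Jaguar cores @ 1.6 GHz, 128-bit FP units: 4-wide mul + 4-wide add = 8 FLOPs/cycle
cpu = peak_gflops(units=8, clock_ghz=1.6, flops_per_unit_per_cycle=8)

# 18 GCN CUs @ 0.8 GHz, 64 lanes each, an FMA counts as 2 FLOPs per lane per cycle
gpu = peak_gflops(units=18, clock_ghz=0.8, flops_per_unit_per_cycle=64 * 2)

print(f"CPU ~{cpu:.0f} GFLOPS, GPU ~{gpu:.0f} GFLOPS, ratio ~{gpu / cpu:.0f}x")
# -> CPU ~102 GFLOPS, GPU ~1843 GFLOPS, ratio ~18x
```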
 
I think Cerny mentioned during one of the early interviews that Sony primarily expect middleware to use GPU compute.

I seem to remember some middleware from several years ago which promised realistic damage simulation. The video actually had small R2D2s being thrown at different surfaces, and wood breaking and splintering realistically. Seeing as this never transpired for the current gen, I can imagine GPGPU compute being perfect for something like this.

I personally would love to see a shooter implement this technology, where wood and objects fracture and break according to the projectile's speed and the surface of the object being hit.

(not the unrealistic BF destruction)
 
^^^
DMM was used for SW: The Force Unleashed and the results were decent, but yeah, maybe now they can achieve better results.
 
It's interesting and the results are good, but it's not really comparable to Microsoft's solution, which only attributes 2x the XBONE's CPU to cloud power (so ~200 GFLOPS), along with more RAM and HDD.

That is not the way they describe it; it was a rough example of the provisions they have made thus far. It wouldn't be a 'cloud' system if the capacity were fixed (both on the server end and in the use case). Obviously, as more load adds up, more capacity will be added, and the potential applications will depend on the complexity of the algorithm and the cloud capacity.
 
I still fail to see how GPGPU is going to work with the coherent system memory. It looks like any GPU access to cached system pages needs to go through the Onion bus, and that's only 10 GB/s.
The bus is 10 GB/s in each direction, so a decent mix of accesses from both sides can max out what the CPU side can pull in on its own.
If the traffic isn't bidirectional, then the question is raised as to whether it is necessary to keep all of the working set in cached memory.
If there were a probe filter, things would be better-behaved, but that might not be in place for Orbis.

Also, don't these Jaguar cores have AVX support? So they're pretty wide (8 floats); it might be easier to just use AVX for vector-heavy calcs. You'll enjoy more coherent memory bandwidth and you have a lot more L2 space vs the GPU.
The CPU section pulls in 20 GB/s. There are things that run well thanks to the straight-line speed of the cores, or that fit well in the L2, but the memory bandwidth is of the same order as the Onion bus.
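To put those bus numbers into per-frame terms, here is a rough sketch; the bandwidth figures are the leaked/rumoured ones (10 GB/s per direction for Onion, ~20 GB/s for the CPU's memory path, 176 GB/s for Garlic), not confirmed specs:

```python
# Rough per-frame data budgets for the rumoured Orbis buses.
# Figures (GB/s) come from leaked/rumoured numbers, not confirmed specs.
BUSES_GB_S = {
    "Onion (coherent, per direction)": 10.0,
    "CPU memory path": 20.0,
    "Garlic (GPU to GDDR5)": 176.0,
}

def mib_per_frame(bandwidth_gb_s: float, fps: int = 60) -> float:
    """How many MiB can move over a bus in one frame at the given fps (approx.)."""
    return bandwidth_gb_s * 1024 / fps

for name, bw in BUSES_GB_S.items():
    print(f"{name}: ~{mib_per_frame(bw):.0f} MiB per 60 Hz frame")
# -> Onion ~171 MiB/frame, CPU path ~341 MiB/frame, Garlic ~3004 MiB/frame
```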
 
I've just been looking at the AnandTech article on Jaguar, and the TDP details are quite ambiguous; it shows that 4 active cores @ 1 GHz on Kabini consume 4 watts, so presumably 8 watts for two Jaguar compute units (8 cores). Unfortunately, there's no indication of the power requirements for the processor @ 1.6 GHz, only that 1.6 is the 'sweet spot'. If we assume a 66% increase for a 400 MHz bump (from 1.6 GHz to 2 GHz), it'd be fairly safe to assume the increase is approximately 16.5% per 100 MHz. A 600 MHz increase from 1 GHz to 1.6 GHz would then be around a 100% increase in consumption, so the total for the 8-core CPU would be around 16 watts @ 1.6 GHz (probably a bit lower). An increase to 2 GHz should take it to about 26.5 watts in total (16 + 66%).

I think it's been speculated recently that the total TDP for the whole system is 100 watts, so the GPU would presumably take somewhere between 50 and 85 watts, as all of the other components have to be taken into consideration too. If we take approximately 75 watts as the total for the GPU (leaving 10 watts for the rest of the system) and scale it by the same percentage as the CPU (16.5% of 75 watts ≈ 12.4 watts for every 100 MHz increase), the additional 200 MHz from 800 MHz to 1 GHz would give a total of about 100 watts (75 + 12.4 + 12.4) for the overclocked GPU.

If the system were overclocked to a 1 GHz GPU and a 2 GHz CPU, the TDP could be around 135 watts (including the 10 watts for the rest of the system)?
I'm almost certainly talking out of my ass though, so anyone (everyone?) brighter than me should correct my misinterpretation of the numbers. I would imagine that the GDDR5 memory must take a significant amount of power as well, but I can't speculate on that number.
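Purely to make that arithmetic reproducible, here is the same extrapolation as a sketch; every input (the 4 W Kabini quad, the 16.5%-per-100-MHz scaling, the 75 W GPU share, the 10 W remainder) is a speculative assumption from the post above, not a measured figure:

```python
# Reproduces the back-of-envelope TDP extrapolation above.
# All inputs are speculative assumptions, not measured figures.

PCT_PER_100MHZ = 0.165  # assumed power increase per 100 MHz step

def scale_power(base_watts: float, base_mhz: int, target_mhz: int) -> float:
    """Add PCT_PER_100MHZ of the base power for every 100 MHz step."""
    steps = (target_mhz - base_mhz) / 100
    return base_watts * (1 + PCT_PER_100MHZ * steps)

cpu_1ghz = 8.0                                   # 2x the 4 W Kabini quad @ 1 GHz
cpu_16 = scale_power(cpu_1ghz, 1000, 1600)       # ~16 W
cpu_20 = scale_power(cpu_16, 1600, 2000)         # ~26 W
gpu_800 = 75.0                                   # guessed share of a ~100 W budget
gpu_1ghz = scale_power(gpu_800, 800, 1000)       # ~100 W
rest = 10.0                                      # guessed rest-of-system figure

print(f"CPU @ 2 GHz ~{cpu_20:.1f} W, GPU @ 1 GHz ~{gpu_1ghz:.1f} W, "
      f"system ~{cpu_20 + gpu_1ghz + rest:.0f} W")
# -> CPU ~26.4 W, GPU ~99.8 W, system ~136 W
```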

Your default text is unreadable on dark backgrounds, such as is common when using tapatalk on portable devices.
 
Given the recent 53 MHz upclock for XB1's GPU, is the same on the cards for PS4, or if not, why not?

Shifty, is this a stealth troll attempt? :mrgreen:

It really seems MS is scrambling to try and get any good news out there after the PR beatings they've been taking. Sony seems content to sit back and stick to their gameplan.

There is some precedent for Sony changing clockspeeds down the road, though I believe it was only on PSPs.

I'd be inclined to doubt a reactionary response from the perceived leader in the clubhouse. But you never know.
 
Given the recent 53 MHz upclock for XB1's GPU, is the same on the cards for PS4, or if not, why not?

A similar upclock would probably have a greater thermal increase, since the GPU is bigger. It would probably still be a small increase.
It's not a huge change, and given the common architecture I doubt that the extra notch isn't there in the multiplier table.
If Orbis silicon came back with its sweet spot slightly higher, it seems like a tiny thing to do. Granted, the upside itself is pretty limited. For Orbis, it provides performance increases where it was already ahead by significantly more than seven percent, and does nothing for areas where it isn't, with the possible slight exception of the graphics command processor.

However, going by VGleaks, there was at least one point where Orbis was downgraded in terms of memory clock and the buses in the uncore. The latter are not as dependent on the vagaries of memory manufacturers, and may be more indicative of how much slack there is.
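For scale, here's what the announced XB1 bump works out to against the rumoured GPU configs (the 12/18 CU counts and the 800 MHz Orbis clock are the leaked figures, not confirmed):

```python
# Quick scale of the 53 MHz upclock against the rumoured GPU configs.
# CU counts and clocks are the leaked/announced figures circulating at the time.

def gcn_tflops(cus: int, clock_mhz: int) -> float:
    """Peak SP TFLOPS for a GCN GPU: CUs * 64 lanes * 2 FLOPs * clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

xb1_old = gcn_tflops(12, 800)   # ~1.23 TFLOPS
xb1_new = gcn_tflops(12, 853)   # ~1.31 TFLOPS
ps4     = gcn_tflops(18, 800)   # ~1.84 TFLOPS

print(f"XB1 upclock: +{(853 / 800 - 1) * 100:.1f}%")
print(f"PS4 lead: {(ps4 / xb1_old - 1) * 100:.0f}% before, "
      f"{(ps4 / xb1_new - 1) * 100:.0f}% after")
# -> upclock +6.6%; PS4 lead ~50% before, ~41% after
```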
 
They could add 30 RPM to the fan and call it a day, and this assumes they didn't set aside a margin of less than 10% (maybe less than 5%) just in case. An upclock shouldn't be significant in lower-power modes, and when it does matter the fan should be spun up anyway.
 
The PS4 GPU was 50% ahead in raw performance; I see no need to raise the clock if that's going to make the system less reliable. MS's move is not going to change much, and Sony should be OK as long as they maintain that $100 difference in price.
 
If we assume this was a "free" upclock, could Sony use that better-than-expected result to lower the voltage a bit instead, and stay at 800 MHz? Say, a 5% voltage drop would be nice.
 
The PS4 GPU was 50% ahead in raw performance; I see no need to raise the clock if that's going to make the system less reliable. MS's move is not going to change much, and Sony should be OK as long as they maintain that $100 difference in price.

This is the sort of scenario where things are better than expected, so it might take more effort to forego the benefits.
Sony probably wouldn't force it if the circumstances don't permit, but we don't know their circumstances.
 
The PS4 GPU was 50% ahead in raw performance; I see no need to raise the clock if that's going to make the system less reliable.
That's the scale I'd like to see. I guess there's a curve of speed/temp/reliability and someone's picked... 5% failure in 3 years, say, and 53 MHz would push that up to... 5.5% (?), and someone somewhere tapped some numbers on a calculator and decided that'd cost $n million and wasn't worth it. But MS, facing the same sort of graph and numbers, have decided it's worth the investment.

What sort of power increase are we talking about here? It'll be more than a linear relationship to the clockspeed increase, so a 7% increase in clock will result in, say, 10% increase in heat. Is there enough AMD information out there to identify a realistic heat increase?
 
What sort of power increase are we talking about here? It'll be more than a linear relationship to the clockspeed increase, so a 7% increase in clock will result in, say, 10% increase in heat. Is there enough AMD information out there to identify a realistic heat increase?
If they get away without raising the voltage, it's less than that. 7% more clockspeed may result in a <=5% increase in heat output, because only the dynamic power scales with the clock; the leakage is unaffected by it.
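A rough sketch of that reasoning; the 80/20 dynamic/leakage split below is purely an illustrative assumption, not a known figure for this chip:

```python
# Illustrates why a clock bump without a voltage bump costs less than a
# proportional amount of power: only the dynamic part (~C*V^2*f) scales
# with frequency, while leakage stays put. The 80/20 split is a made-up
# illustrative figure, not a known number for this silicon.

def total_power(base_w: float, leak_fraction: float,
                clock_scale: float, volt_scale: float = 1.0) -> float:
    """Dynamic part scales with f * V^2; the static (leakage) part does not."""
    dynamic = base_w * (1 - leak_fraction) * clock_scale * volt_scale ** 2
    static = base_w * leak_fraction
    return dynamic + static

base = 100.0                                  # arbitrary baseline, watts
bumped = total_power(base, leak_fraction=0.2, clock_scale=853 / 800)
print(f"+6.6% clock at the same voltage: +{bumped - base:.1f} W "
      f"({(bumped / base - 1) * 100:.1f}%)")
# -> roughly +5.3 W (+5.3%), i.e. less than the 6.6% clock increase
```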
 