PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Having the secondary processor separate from the APU could make some things simpler, such as keeping it awake while the whole APU and RAM are completely powered off to save power.

The wording used to describe it doesn't give much of a physical hint.
One count against using the A5 in AMD's TrustZone implementation for this is that the background processor in the PS4 is described as arbitrating network and disk transactions, as well as handling automatic updates.

That's much more than what AMD's TrustZone implementation is tasked with doing.

Why not?

Couldn't AMD's TrustZone implementation mean a second "secured" OS running on the PS4, where any security function, such as "arbitrating network and disk transactions, as well as handling automatic updates", would fall under the TrustZone OS?
 
I'm going by the description of AMD's implementation of the coprocessor model, where there is an x86 side and a secured ARM side of the software.
The A5 processor in that case would need the x86 CPUs as part of its function, while Sony has indicated quite a bit of autonomy for the custom processor.

The A5 is the bare-minimum core AMD could have used for TrustZone, so contention between networking and disk management and the secure function calls from multiple Jaguar cores sounds like a potential performance bottleneck.
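To put the contention worry in concrete terms, here is a toy single-server queue model in Python (every rate and service time below is invented for illustration; none of them are published PS4 or A5 figures). It just shows how one small core servicing both I/O housekeeping and secure calls from eight CPU cores backs up once arrivals outpace what it can serve:

import random

random.seed(0)

# Invented, illustrative parameters -- not real PS4/A5 figures.
SIM_MS      = 10_000   # simulate 10 seconds in 1 ms steps
IO_RATE     = 0.30     # chance per ms of a network/disk housekeeping job
SECURE_RATE = 0.08     # chance per ms of a secure call, per CPU core
CPU_CORES   = 8
SERVICE_MS  = 2        # each job occupies the small core for 2 ms

queue, busy, jobs, depth_sum = 0, 0, 0, 0
for _ in range(SIM_MS):
    if random.random() < IO_RATE:      # I/O housekeeping arrival
        queue += 1
    for _ in range(CPU_CORES):         # secure-call arrivals from the CPU cores
        if random.random() < SECURE_RATE:
            queue += 1
    if busy == 0 and queue > 0:        # start the next job if the core is idle
        queue -= 1
        busy = SERVICE_MS
        jobs += 1
    if busy > 0:
        busy -= 1
    depth_sum += queue                 # track how deep the backlog gets

print(f"served {jobs} jobs, average queue depth {depth_sum / SIM_MS:.0f}")

With these made-up rates, arrivals (about 0.94 jobs/ms) are nearly double the service capacity (0.5 jobs/ms), so the backlog grows without bound over the run, which is exactly the bottleneck scenario described above.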
 
One of the early rumors was that Sony chose not to use TrustZone.

The A5 is probably autonomous in order to manage the Southbridge. If it's used as a security processor at the same time, it may have some beefed-up math capability to support a tiny security kernel (like the locked-down SPU in the PS3). Sony will probably keep the unit secret for obvious reasons.
 
I'm going by the description of AMD's implementation of the coprocessor model, where there is an x86 side and a secured ARM side of the software.
The A5 processor in that case would need the x86 CPUs as part of its function, while Sony has indicated quite a bit of autonomy for the custom processor.

The A5 is the bare-minimum core AMD could have used for TrustZone, so contention between networking and disk management and the secure function calls from multiple Jaguar cores sounds like a potential performance bottleneck.

From what I am reading, the A5 serves as a processor that controls the security subsystem. TZ can incorporate any part of the SoC into the secured environment, as every part of the SoC has to exist in one of the two states (non-secured and secured). All security functions are handled by the secured world, which can include the Jaguars and GPU.

Contention isn't a problem when the Jaguars and GPU aren't active, so couldn't background downloads be facilitated by the TZ processor while in a low-power state?
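For what it's worth, the two-state idea can be pictured with a toy model (conceptual only, not AMD's actual implementation): every resource carries a secure/non-secure tag, and the hardware simply rejects non-secure accesses to secure resources.

# Toy model of TrustZone's two-world partitioning (conceptual only).
SECURE, NON_SECURE = "secure", "non-secure"

resources = {
    "crypto_keys":   SECURE,      # only the secured world may touch these
    "update_engine": SECURE,
    "game_heap":     NON_SECURE,  # ordinary world, e.g. Jaguar cores running a game
}

def access(resource: str, world: str) -> str:
    """Model the NS-bit check: secure resources reject non-secure masters."""
    if resources[resource] == SECURE and world == NON_SECURE:
        return "FAULT"            # hardware blocks the access
    return "OK"

print(access("crypto_keys", NON_SECURE))  # FAULT -- game code can't read keys
print(access("crypto_keys", SECURE))      # OK    -- the secured world handles it
print(access("game_heap", SECURE))        # OK    -- the secure world can see everything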
 
In a low-power state, the main CPU should be off; the A5 will take over. There should be a small, specialized kernel running on that unit. Besides background downloads and Southbridge management, it may also handle firmware updates.

I suspect 3dilettante was referring to a more general and high level "OS".
 
Contention isn't a problem when the Jaguars and GPU aren't active, so couldn't background downloads be facilitated by the TZ processor while in a low-power state?
My interpretation of the press surrounding the background processor is that it can continue its work even when the system is active, which opens the A5 up to the possibility of being called upon to load-balance network and disk traffic while being called by multiple CPU cores.

AMD hasn't advertised a chip with the ability to run with the system wholly off save for the A5 and probably some memory and uncore, but that is just a lack of historical evidence rather than a requirement.
A separate chip could allow for very low power consumption for a networked idle state, since historically APUs in that power range tend to idle at power levels higher than many embedded cores pull at load.
 
To minimize contention, perhaps they may moderate "secondary" activities by having the main processor approve/signal the secondary CPU to run "distracting" side jobs (e.g., a short segment of a background download). Maybe some sort of prioritized job scheme?
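A scheme like that could be as simple as a priority queue on the secondary CPU, with main-processor requests outranking the approved side jobs. A minimal Python sketch (the job names and priority values are invented for illustration):

import heapq

# Toy prioritized job queue for the secondary CPU (illustrative only).
# Lower number = higher priority; the main processor "approves" side jobs
# by enqueueing them at a lower priority than its own requests.
jobs = []
heapq.heappush(jobs, (0, "secure call from main CPU"))        # must run first
heapq.heappush(jobs, (1, "disk I/O arbitration"))
heapq.heappush(jobs, (5, "background download, 1 MB chunk"))  # approved side job
heapq.heappush(jobs, (5, "background download, 1 MB chunk"))

while jobs:
    priority, job = heapq.heappop(jobs)
    print(f"priority {priority}: {job}")

Background chunks only run when nothing higher-priority is pending, which would keep the "distracting" work from getting in the way of the main system's requests.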
 
I just read Tomshardware Kabini review and in the end it says this:

Lastly, it's a bit of a bummer that neither Temash nor Kabini incorporate AMD's heterogeneous unified memory access (hUMA). This is the feature that will allow the GPU and CPU to share system memory without copying it back and forth, eliminating a massive source of latency in today's APUs. This is where we expect the company's SoCs to stand apart from some of the other highly-integrated processors being designed. Unfortunately, we won't see hUMA in a shipping APU until Kaveri is released later this year.

http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-14.html

Is this true? Kabini doesn't support hUMA? Then the PS4 isn't hUMA either?
 
To minimize contention, perhaps they may moderate "secondary" activities by having the main processor approve/signal the secondary CPU to run "distracting" side jobs (e.g., a short segment of a background download). Maybe some sort of prioritized job scheme?

I finally found the reason for thinking it was on the southbridge...

A translation on GAF of an interview with Cerny:
http://www.neogaf.com/forum/showthread.php?t=532077

Cerny: The second custom chip is essentially the Southbridge. However, this also has an embedded CPU. This will always be powered, and even when the PS4 is powered off, it is monitoring all IO systems. The embedded CPU and Southbridge manages download processes and all HDD access. Of course, even with the power off.

Ito: The second custom chip also takes into consideration environmental problems. For background downloading, if the main CPU needs to be started every time, energy consumption increases significantly, so we run this with the second chip. Particularly in Europe, there are strict energy consumption regulations, so handling consumption in this manner is also one of our goals.

Cerny: There are also network bandwidth considerations. Background downloading allows for smooth downloading of large files even when bandwidth is limited.
 
I just read Tomshardware Kabini review and in the end it says this:

Lastly, it's a bit of a bummer that neither Temash nor Kabini incorporate AMD's heterogeneous unified memory access (hUMA). This is the feature that will allow the GPU and CPU to share system memory without copying it back and forth, eliminating a massive source of latency in today's APUs. This is where we expect the company's SoCs to stand apart from some of the other highly-integrated processors being designed. Unfortunately, we won't see hUMA in a shipping APU until Kaveri is released later this year.

http://www.tomshardware.com/reviews/kabini-a4-5000-review,3518-14.html

Is this true? Kabini doesn't support hUMA? Then the PS4 isn't hUMA either?

We already know that the PS4 has hUMA functionality (one pool of memory, addressable by both CPU and GPU). We also know that the PS4 has direct lines from CPU to GPU. So why are you still assuming that PS4 = Kabini?
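The practical difference hUMA points at can be sketched like this (the function names are hypothetical stand-ins for illustration, not any real driver API):

# Conceptual contrast only; these functions are hypothetical, not a real API.

def gpu_process_copy(cpu_buffer: bytearray) -> bytearray:
    """Non-hUMA style: the GPU works on a *copy* in its own address space."""
    gpu_buffer = bytearray(cpu_buffer)    # upload: CPU -> GPU copy
    for i in range(len(gpu_buffer)):      # stand-in for the GPU's work
        gpu_buffer[i] ^= 0xFF
    return bytearray(gpu_buffer)          # readback: GPU -> CPU copy

def gpu_process_shared(buffer: bytearray) -> None:
    """hUMA style: CPU and GPU address the same memory, so no copies."""
    for i in range(len(buffer)):          # the GPU mutates memory in place
        buffer[i] ^= 0xFF

data = bytearray(b"\x00\x01\x02")
gpu_process_shared(data)                  # result visible to the CPU immediately
print(data.hex())                         # fffefd

In the copy model the data crosses to and from the GPU's space twice; in the shared model a pointer is handed over and the GPU's writes are immediately visible to the CPU.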
 
We already know that the PS4 has hUMA functionality (one pool of memory, addressable by both CPU and GPU). We also know that the PS4 has direct lines from CPU to GPU. So why are you still assuming that PS4 = Kabini?

Well, reading this:

http://techreport.com/news/24737/amd-sheds-light-on-kaveri-uniform-memory-architecture

I reach this conclusion:

- Kabini has a shared virtual (not physical) memory address space, so it's not hUMA.
- Kaveri will have a shared physical memory address space, so it will be hUMA.

Then the PS4 is more like Kaveri than Kabini?
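That reading can be illustrated with a toy page-table model (a deliberate simplification of the distinction drawn above, nothing more):

# Toy model: shared *virtual* vs shared *physical* address space.
# Each agent has its own page table mapping virtual page -> physical page.

# Kabini-style: CPU and GPU agree on virtual addresses, but the buffer may
# still live in two different physical pages, so a copy/sync step remains.
cpu_pt = {0x1000: "phys_A"}   # CPU's view of virtual page 0x1000
gpu_pt = {0x1000: "phys_B"}   # GPU's separate copy of the same buffer

# Kaveri/hUMA-style: both page tables resolve to the *same* physical page,
# so a pointer handed from CPU to GPU just works, with no copy.
cpu_pt_huma = {0x1000: "phys_A"}
gpu_pt_huma = {0x1000: "phys_A"}

def needs_copy(cpu_table, gpu_table, vaddr):
    return cpu_table[vaddr] != gpu_table[vaddr]

print(needs_copy(cpu_pt, gpu_pt, 0x1000))            # True  -- copy required
print(needs_copy(cpu_pt_huma, gpu_pt_huma, 0x1000))  # False -- zero-copy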
 
The DUH-D1000AA prototype Development Kit for PS4

http://www.engadget.com/2013/07/16/sony-ps4-development-kit-fcc/

Sony proudly showed off its PlayStation 4 hardware for the first time at E3, and now we're getting a peek at what developers are working with this generation thanks to the FCC. The DUH-D1000AA prototype Development Kit for PS4 is listed in these documents, tested for its Bluetooth and 802.11 b/g/n WiFi radios. As one would expect, the diagrams show it eschews the sleek design of the consumer model for extra cooling, a shape made for rack mounts, plus extra indicator lights and ports. Also of note is a "max clock frequency" listing of 2.75GHz, and although we don't know how fast the game system will run by default, it's interesting to hear what all that silicon may be capable of (as a commenter points out below, that may relate to the system's 8GB of GDDR5 RAM) while maintaining a temperature between 5 and 35 degrees Celsius. Hit the link below to check out the documents for yourself; after seeing this and the system's controller become a part of the FCC's database, all we're left waiting for is Mark Cerny's baby.
 
Per earlier in the thread:

The 2.75GHz "maximum clock frequency in the system" is clearly the WCK of the GDDR5 running at the announced 5.5 Gbps. GDDR5 needs two clock signals to work: the command clock (CK) at 1.375GHz and the write clock (WCK) at the mentioned 2.75GHz (for 5.5 Gbps operation), which is likely the highest clock in the system, not the CPU clock.

The FCC is more interested in the radio interference from the device, and at what frequencies. The CPU (or other component's) clock speed is not interesting to the FCC in the same way it would be to most of us.
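The clock relationships check out with simple arithmetic (the 256-bit bus width is the PS4's announced memory bus; the rest follows from how GDDR5 clocking works):

# GDDR5 clock arithmetic for the announced 5.5 Gbps parts.
wck_ghz       = 2.75                # WCK, the fastest clock in the system
data_rate     = 2 * wck_ghz         # data moves on both WCK edges -> 5.5 Gbps/pin
ck_ghz        = wck_ghz / 2         # command clock CK at half WCK -> 1.375 GHz
bus_bits      = 256                 # PS4's announced memory bus width
bandwidth_gbs = data_rate * bus_bits / 8

print(data_rate, ck_ghz, bandwidth_gbs)  # 5.5 1.375 176.0

That 176 GB/s matches the memory bandwidth Sony has quoted for the PS4, which supports the reading that 2.75GHz is the memory WCK, not a CPU clock.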
 
Also, the "while maintaining a temperature between 5 and 35 degrees celsius" bit is just nonsense. What it says is that the unit is designed to function in ambient temperatures between 5 and 35 degrees Celsius, not that the unit's own temperature would be 5 to 35 degrees Celsius.
 
we should sign a petition that requires the FCC and other agencies to gather more interesting information!


for regulatory purposes entirely! ;)
 
we should sign a petition that requires the FCC and other agencies to gather more interesting information!


for regulatory purposes entirely! ;)

Yeah, very dangerously clocked CPUs can open wormholes! But maybe the FCC guys are Nintendo fans and don't give a shit about those things ;)
 
I had a look but couldn't see it. Which slide is it?

3GB for OS reservations makes no sense :runaway:

It was strongly believed that when the target RAM was 4GB, the OS reservation would be no more than 512MB. And after Sony bowed to developer pressure to put more RAM in, they give just 1.5GB of that extra 4GB to games, and 2.5GB to the OS?

That's ludicrous. Microsoft are doing something different, so their 3GB OS reservation may make sense. Sony are making what is primarily a games console.

Sure it makes no sense... even 1GB makes no sense.
512MB was already decided, and it was WAY more than what they really needed... the PS3 has 32MB... so 256MB is really enough; it's all the CPU RAM in the PS3. 512MB is already waaay too much room to grow, and I'm 99% sure Sony will not double it to 1GB. The fact that they have doubled total RAM from 4GB to 8GB changes nothing about the fact that 512MB is way beyond their practical needs.
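For reference, the arithmetic behind the 3GB complaint, using only the rumoured figures from this thread (nothing here is a confirmed spec):

# Rumoured split, per the posts above (GB) -- not confirmed figures.
old_total, old_os = 4.0, 0.5          # pre-bump target: 512 MB OS reservation
extra             = 4.0               # the RAM bump from 4 GB to 8 GB
extra_to_games    = 1.5               # rumoured share of the bump given to games

new_os    = old_os + (extra - extra_to_games)      # 0.5 + 2.5 = 3.0 GB for the OS
new_games = (old_total - old_os) + extra_to_games  # 3.5 + 1.5 = 5.0 GB for games
print(new_os, new_games)                           # 3.0 5.0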
 