PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

A whole reserved core seems good to me, as that's a reliable arrangement for games and OS alike, delivering a predictable, smooth experience. PS3 had a reserved core, and there are 8 of them in Orbis. I've been saying elsewhere, with the GPU handling heavy compute functions, dedicated video-coding hardware, and an audio DSP, just how much work is there really for the CPU to do? So allocating a stable amount for OS functions in a predictable multitasking OS is acceptable IMO. Although in this age of unstable framerates, maybe juddery interfaces and game stutters are fine by consumers? :p
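To make "reserving a core" concrete, here's a minimal sketch of what it looks like from the software side. It uses Linux's affinity call, since the PS4's FreeBSD-derived API isn't public, and the actual core split below is pure guesswork on my part.

Code:
import os

# Hypothetical core split on an 8-core CPU: one core fenced off for OS
# services, the rest pinned for the game. os.sched_setaffinity is the
# Linux call; a FreeBSD-based OS would use its own equivalent.
ALL_CORES  = set(range(8))
OS_CORES   = {7}                    # assumption: which core is reserved
GAME_CORES = ALL_CORES - OS_CORES

def pin_to_game_cores():
    """Restrict this process (pid 0 = self) to the non-reserved cores."""
    os.sched_setaffinity(0, GAME_CORES)

pin_to_game_cores()
print("running on cores:", sorted(os.sched_getaffinity(0)))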

At the same time, there's the streaming (Share) button on the PS4's controller... Would it make sense to have a reserved core partially intended for that? While I wouldn't be surprised if there are some little ASICs or blocks here and there designed to help with that, not all of it can be handled by custom hardware.
 
Does BSD reserve a full core for anything? If they pause all apps and leave background tasks to the second chip, do they still need a full OS core?

Security?
 
At the same time, there's the streaming (Share) button on the PS4's controller... Would it make sense to have a reserved core partially intended for that? While I wouldn't be surprised if there are some little ASICs or blocks here and there designed to help with that, not all of it can be handled by custom hardware.

The Share button's functionality is pretty much entirely covered by the dedicated video encoding hardware and the southbridge's custom chip. Encoder to handle the video (live stream, clip upload), custom chip to handle the background upload. We have pretty solid information directly from Sony (like the Cerny Watch Impress interview) about this.
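To illustrate that division of labour, here's a toy model: the "encoder" fills a small buffer of compressed chunks and a background "uploader" drains it, with the main loop never involved. Every name in it is made up; it just shows why the Share path can stay off the game's CPU cores.

Code:
import queue, threading, time

# Toy model: the hardware encoder produces compressed video chunks and
# the southbridge's custom chip uploads them in the background. Both
# "devices" are faked with threads here; none of this is a real PS4 API.
chunks = queue.Queue(maxsize=8)          # small ring of encoded chunks

def hardware_encoder(n_chunks):
    for i in range(n_chunks):
        time.sleep(0.01)                 # pretend to encode a video slice
        chunks.put(f"h264-chunk-{i}")
    chunks.put(None)                     # end-of-stream marker

def background_uploader():
    while (chunk := chunks.get()) is not None:
        time.sleep(0.02)                 # pretend to push it to the network
        print("uploaded", chunk)

threading.Thread(target=hardware_encoder, args=(5,), daemon=True).start()
background_uploader()                    # the game loop never touches this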
 
At the same time, there's the streaming (Share) button on the PS4's controller... Would it make sense to have a reserved core partially intended for that? While I wouldn't be surprised if there are some little ASICs or blocks here and there designed to help with that, not all of it can be handled by custom hardware.
If it's streaming the final display output, there could be almost no additional involvement of the main CPU cores, but things could easily be arranged differently.
 
http://www.gamasutra.com/view/news/189368/

In a forthcoming article, Gamasutra will share the many details of the PlayStation 4's architecture and design that came to light during this extensive and highly technical conversation.

Does anyone know when the above article will be posted? I hope it clears up a few issues. For example, the whole debate about additional ALUs, the capabilities of the ARM CPU, etc.
 
Someone said it will take several weeks to confirm the tech details. They didn't promise any date at all.

I am most curious about expandability. ^_^

Edit: Vita reserves 1 core and 256 MB of RAM for the OS, but it only uses parts of BSD. The PS4 OS is said to be based on BSD.
 
A whole reserved core seems good to me, as that's a reliable arrangement for games and OS alike, delivering a predictable, smooth experience. PS3 had a reserved core, and there are 8 of them in Orbis. I've been saying elsewhere, with the GPU handling heavy compute functions, dedicated video-coding hardware, and an audio DSP, just how much work is there really for the CPU to do? So allocating a stable amount for OS functions in a predictable multitasking OS is acceptable IMO. Although in this age of unstable framerates, maybe juddery interfaces and game stutters are fine by consumers? :p

There are 8 new GPU compute pipes according to the leaks. They possibly correspond to the 8 CPU cores (for feeding them).

Cerny spoke about maximizing compute power.
If I were to hazard a guess, Sony may not reserve a core in the traditional sense. BSD probably just schedules the threads in an SMP manner.

The secondary chip should be able to help in very specific ways (keeping common background tasks off the main cores).
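If the one-pipe-per-core idea is right, the submission path could be completely lock-free from the CPU's point of view. A speculative sketch (all names invented):

Code:
import queue, threading

# Speculative picture of "one compute pipe per CPU core": each core runs
# a feeder thread that pushes jobs into its own hardware queue, so no
# cross-core locking is needed when submitting work.
N_PIPES = 8
pipes = [queue.Queue() for _ in range(N_PIPES)]

def feeder(core_id, jobs):
    for job in jobs:
        pipes[core_id].put(job)   # each core only touches its own pipe

threads = [threading.Thread(target=feeder,
                            args=(i, [f"job-{i}-{j}" for j in range(3)]))
           for i in range(N_PIPES)]
for t in threads: t.start()
for t in threads: t.join()
print([p.qsize() for p in pipes])  # 3 jobs queued per pipe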
 
Although I agree, the issue comes in trying to 'cost' the RAM usage of apps on a home console. There's just going to be less going on. You're not going to have a calculator, a night-sky browser, a comic photo editor, and all the other apps that you'd have on a tablet running concurrently on a PS4, IMO. People already have pads and phones for that stuff. PS4 will need communications infrastructure for sure (web, chat, videochat), and a music player for custom soundtracks, and that sort of thing, but that stuff can be multitasked on a 256 MB phone. I think that's the comparison Joe Gamer will make when reading stuff on forums, but I doubt an extra 512 MB reserved is going to make any noticeable difference to games, so I don't see a problem myself. Would be nice to have a breakdown of RAM utilisation though, just for interest.

I am hoping for tighter web and game integration! E.g. taking a map from Google Maps or OpenStreetMap for a driving + travel game.
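The nice thing is that the map side of that is already trivial: OpenStreetMap's "slippy map" tile scheme is public, so turning a position into a fetchable tile is a few lines. (The URL below is OSM's public tile server; a real game would obviously license or mirror its own tiles.)

Code:
import math

def osm_tile(lat_deg, lon_deg, zoom):
    """Standard OpenStreetMap slippy-map tile coordinates for a point."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi)
            / 2.0 * n)
    return x, y

# e.g. a driving game pulling the tile around central London at zoom 12:
x, y = osm_tile(51.5074, -0.1278, 12)
print(f"https://tile.openstreetmap.org/12/{x}/{y}.png")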
 

Excellent interview.


The OS will automatically cache Blu-ray games on the HDD when the game is not accessing the BD drive. ---- Will those caches be permanent, or will they be overwritten in time?

The suspend state can save game progress by leaving the GDDR5 RAM pool powered while the APU is shut down. --- How power-hungry will that be? Can they downclock the GDDR5 RAM enough to meet the EU's "green" power laws?
 
"If you look at the portion of the GPU available to compute throughout the frame, it varies dramatically from instant to instant. For example, something like opaque shadow map rendering doesn't even use a pixel shader, it’s entirely done by vertex shaders and the rasterization hardware -- so graphics aren't using most of the 1.8 teraflops of ALU available in the CUs. Times like that during the game frame are an opportunity to say, 'Okay, all that compute you wanted to do, turn it up to 11 now.'"
Can someone explain this?
 
The launch lineup for PlayStation 4 -- though I unfortunately can’t give the title count -- is going to be stronger than any prior PlayStation hardware. And that's a result of that familiarity

So happy to hear that...and hopefully it's not just PR talk.
 
Will those caches be permanent, or will they be overwritten in time?
We would have to assume they will get overwritten eventually, wouldn't we? :) HDDs are rewrite-capable, and of finite size, so they could not hold an unlimited number of games... Possibly the game could prompt the user to delete something if the system runs out of space, but if the console caches games without intervention, we would have to assume it also cleans up after itself dynamically and automatically, methinks.
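We don't know Sony's actual policy, but the obvious scheme is least-recently-played eviction. A toy model (sizes in GB, titles just examples):

Code:
from collections import OrderedDict

# Toy self-cleaning game cache: least-recently-played titles get evicted
# when a new install won't fit. Sony's real policy is unknown.
class GameCache:
    def __init__(self, capacity_gb):
        self.capacity, self.used = capacity_gb, 0
        self.games = OrderedDict()          # insertion order = LRU order

    def play(self, title):                  # playing a game marks it recent
        if title in self.games:
            self.games.move_to_end(title)

    def install(self, title, size_gb):
        while self.used + size_gb > self.capacity and self.games:
            old, old_size = self.games.popitem(last=False)   # evict oldest
            self.used -= old_size
            print("evicted", old)
        self.games[title] = size_gb
        self.used += size_gb

cache = GameCache(capacity_gb=100)
cache.install("Game A", 37)
cache.install("Game B", 40)
cache.play("Game A")
cache.install("Game C", 35)   # evicts Game B (least recently played)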

Can they downclock the GDDR5 RAM enough to meet the EU's "green" power laws?
The RAM would probably not be clocked at all, just set to "self-refresh", and that would be incredibly frugal on power (a handful of milliwatts, if that much). Standard DDR can do this, and seeing as framebuffers also have contents that need to be preserved when a computer is put into standby mode, I would think GDDR RAM retains this ability as well...
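Back-of-envelope, with the caveat that the actual self-refresh draw of the PS4's GDDR5 isn't public -- the per-chip figure below is a ballpark guess from typical DRAM self-refresh numbers:

Code:
# Assumed configuration: 16 x 4 Gbit chips = 8 GB. Per-chip self-refresh
# draw of 10-20 mW is a guess, not a measured GDDR5 figure.
chips = 16
mw_low, mw_high = 10, 20

low  = chips * mw_low  / 1000.0    # watts
high = chips * mw_high / 1000.0
print(f"RAM self-refresh: {low:.2f} - {high:.2f} W")   # ~0.16 - 0.32 W

Even at the pessimistic end that's a fraction of a watt, which is why a self-refresh suspend mode should sit comfortably within sub-watt standby regulations.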
 
Can someone explain this?

It's basically like any other process. Some operations leave other parts of the hardware idle/stalled -- in this case, the math-heavy ALUs. It sounds essentially akin to the idea behind hyperthreading: trying to keep the maximum amount of resources busy for more of the time. But it probably means the GPU is going to be pushing out a bit more heat than equivalent designs in the PC realm at the moment.
 
Can someone explain this?
When the ALUs aren't being used for graphics work, they can be used for compute. Although it's a bit odd of him to say opaque shadow maps use vertex shaders on a unified shader architecture -- the ALUs are still going to be tied up processing the workload. Maybe the texture units will be sat idle, but good luck running compute on those. ;) Now if we were still on discrete shaders it'd make sense, turning the pixel shaders to compute while the vertex units are working, but we're not.

Any dev on this board with experience of a US architecture will be well capable of telling us exactly how much idle time the GPU experiences (Sebbbi? Joker?) that could be spent on compute. Until they interject, I'm inclined to believe that a GPU's ALUs are rarely idle between jobs, and any compute will be taking away from the graphics tasks that could be done instead. It'll be a compromise use of resources as ever.
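For what it's worth, here's a toy timeline model of what Cerny is describing -- graphics ALU demand varying across the frame, with compute soaking up the slack. The phase names and utilisation figures are invented purely for illustration:

Code:
# Schematic frame: each phase is an equal time slice, with an assumed
# fraction of ALU capacity consumed by graphics. Spare capacity gets
# filled from a backlog of async compute work.
frame_phases = [
    ("shadow maps",  0.25),   # vertex/raster bound, ALUs mostly idle
    ("g-buffer",     0.80),
    ("lighting",     0.95),
    ("post-process", 0.70),
]
compute_backlog = 1.5         # queued compute, in phase-slices of ALU time

for phase, gfx_alu in frame_phases:
    spare = 1.0 - gfx_alu     # ALU fraction graphics leaves free
    done = min(spare, compute_backlog)
    compute_backlog -= done
    print(f"{phase:12s} gfx={gfx_alu:.0%}  compute fills {done:.0%}")
print(f"compute left over: {compute_backlog:.2f}")

Whether real frames have that much genuinely idle ALU time is exactly the question raised above.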
 
It confirms a lot, including the compute rings being custom, simultaneous compute and graphics with almost whatever compromise you want, the RAM not being on the chip (thus questioning whether we can still call it an APU), and A LOT more...

Sure we can call it that; APUs don't need to have any RAM in their structure [only CPU/GPU caches].
 
Read the article, and in truth there isn't a great deal of new info. The main message is that Sony have worked with AMD to customise the GPU to add compute in a big way. There are 64 compute queues available to populate with jobs, and Sony have developed a system based on their experience with SPURS to manage job allocation in hardware and software (firmware). So the hardware seems very much AMD with a few Sony tweaks (e.g. cache bypassing for CPU<>GPU communication), plus Sony software. I liked the idea of using compute to thin out vertex work. Compute should indeed help with advanced graphics optimisation (although then is it really compute, or graphics work? Is it not time to give up on the distinction entirely?). Other topics like the audio DSP were only brushed on lightly, so we've no idea what that does or doesn't bring to the hardware in terms of specifications.
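As a rough software picture of "64 compute queues to populate with jobs" -- many independent rings, each with a priority, drained whenever the GPU has spare cycles -- something like this (a guess at the programming model, not Sony's actual API):

Code:
import heapq

NUM_QUEUES = 64
queues = {q: [] for q in range(NUM_QUEUES)}   # one priority heap per queue

def submit(queue_id, priority, job):
    heapq.heappush(queues[queue_id], (priority, job))

def drain_one():
    """Pop the most urgent job across all non-empty queues."""
    ready = [(q[0][0], qid) for qid, q in queues.items() if q]
    if not ready:
        return None
    _, qid = min(ready)
    return heapq.heappop(queues[qid])[1]

submit(0, priority=1, job="physics batch")
submit(3, priority=0, job="audio raycast")    # lower number = more urgent
print(drain_one())                            # -> audio raycast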
 
It confirms a lot, including the compute rings being custom, simultaneous compute and graphics with almost whatever compromise you want, the RAM not being on the chip (thus questioning whether we can still call it an APU), and A LOT more...
I think you're confusing APU and SOC. Lots of people do.
 