PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

It should be GCN (as opposed to GCN2), right?
What does the article say?

Hopefully, the author will correct these errors.

EDIT:
The Nikkei one is interesting:
As for the "supercharged" parts, or the parts that SCE extended, he said, "There are many, but four of them are representative." They are: (1) a structure that realizes high-speed data transmission between the CPU and GPU, (2) a structure that reduces the number of times data is written back from the cache memory in the GPU, (3) a structure that allows priorities to be set at multiple levels for arithmetic and graphics processing, and (4) a function that lets the CPU take over preprocessing for the GPU.

(3) and (4) are intriguing.
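Just to make (3) concrete, here's a rough C++ sketch of what multi-level prioritisation could look like from the application side. The pipe/job priority types are invented purely for illustration; they are not anything from Sony's actual SDK.

```cpp
// Hypothetical illustration of point (3): letting the application assign
// priorities to graphics and compute work at more than one level.
// None of these types come from a real SDK; they only sketch the idea.
#include <cstdint>
#include <queue>
#include <vector>

enum class WorkKind { Graphics, Compute };

struct GpuJob {
    WorkKind kind;
    uint32_t pipePriority;   // priority of the queue (pipe) the job is submitted to
    uint32_t jobPriority;    // priority of the job within that queue
};

// Order jobs first by pipe priority, then by per-job priority.
struct JobOrder {
    bool operator()(const GpuJob& a, const GpuJob& b) const {
        if (a.pipePriority != b.pipePriority) return a.pipePriority < b.pipePriority;
        return a.jobPriority < b.jobPriority;
    }
};

int main() {
    std::priority_queue<GpuJob, std::vector<GpuJob>, JobOrder> scheduler;
    scheduler.push({WorkKind::Graphics, /*pipe*/2, /*job*/1});  // main render pass
    scheduler.push({WorkKind::Compute,  /*pipe*/1, /*job*/3});  // physics batch
    scheduler.push({WorkKind::Compute,  /*pipe*/1, /*job*/0});  // low-priority culling
    // The highest-priority job is dispatched first; graphics and compute share
    // the same pool of CUs, so priority decides who gets the next free slot.
    while (!scheduler.empty()) scheduler.pop();
}
```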
 
It should be GCN (as opposed to GCN2), right?
What does the article say?

Hopefully, the author will correct these errors.

It seems a bit muddled at the moment whether it is or isn't (or, for that matter, what exactly counts as GCN 2). That said, they (Watch Impress) do try to make the distinction between GCN and GCN 2 in this article (also about the PS4 GPU and GCN in general, and a good read as well).
 
(3) and (4) are intriguing.
(3) sounds a lot like the recently much-talked-of Durango display planes. ...Unless it's plain ole Z-buffering, but that wouldn't really be worth mentioning, seeing as it's literally 30-year-old tech by now.
 
It's hard trying to interpret these interviews translated from Japanese. Waiting for the Gamasutra one.
In a forthcoming article, Gamasutra will share the many details of the PlayStation 4's architecture and design that came to light during this extensive and highly technical conversation.

It's great to see Sony/Cerny talk so much about PS4 tech though. Hope/assume MS follows suit soon, like they did in the early days of the 360. Both companies are demonstrating, to their credit, that they aren't Nintendo.
 
(3) sounds a lot like the recently much-talked-of Durango display planes. ...Unless it's plain ole Z-buffering, but that wouldn't really be worth mentioning, seeing as it's literally 30-year-old tech by now.

(3) He is talking about being able to run code at the same time on the same thread, and (4) is like the CPU playing the part of the PPU in the Cell and the GPGPU playing the part of the SPEs.

[Image: GPGPU+PS4+2.jpg — chart showing the PS4 GPU running graphics and compute jobs on the same CUs at the same time]
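If that reading is right, the CPU's "PPU role" would mostly be chopping work into self-contained packets for the GPU to chew through, SPURS-style. A minimal C++ sketch of that idea; all of the structures below are invented for illustration, nothing here is a real PS4 type.

```cpp
// Rough sketch of the "CPU as PPU, GPU CUs as SPEs" reading of point (4):
// a CPU thread pre-processes data into self-contained job packets, which a
// compute front-end then pulls and executes.
#include <cstddef>
#include <cstdint>
#include <vector>

struct ComputeJob {
    const float* input;    // pointer into shared GDDR5 (CPU and GPU see the same memory)
    float*       output;
    size_t       count;
    uint32_t     kernelId; // which compute kernel to run
};

// CPU-side "PPU" role: chop a big task into GPU-sized packets.
std::vector<ComputeJob> preprocess(const float* data, float* out, size_t n) {
    std::vector<ComputeJob> jobs;
    const size_t chunk = 64 * 1024;
    for (size_t off = 0; off < n; off += chunk) {
        size_t len = (n - off < chunk) ? (n - off) : chunk;
        jobs.push_back({data + off, out + off, len, /*kernelId=*/7});
    }
    return jobs; // in a real engine these would be written into a GPU-visible ring
}
```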
 
(3) sounds a lot like the recently much-talked-of Durango display planes. ...Unless it's plain ole Z-buffering, but that wouldn't really be worth mentioning, seeing as it's literally 30-year-old tech by now.

It's prioritizing arithmetic and graphics processing at different levels. They already refined input and cache management between the CPU and GPU. They probably want to extend developer control down to prioritized scheduling for real-time programming (like the SPURS model!)


Sounds way more like something along the lines of this, IMO. I guess it comes down to what you call "preprocessing": is it at the lowest level, i.e. getting data and instructions to the ALUs for execution, or something at a higher level, like a stage in the OGL/DX/etc. pipeline?

Yeah, basically an efficient mechanism for the CPU to pre-process data for the GPU. At this moment, we know they have 8 ring buffers per pipe. Perhaps some way to map and manipulate those buffers?

EDIT: Can the CPU access the DMA units?
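For reference, here's a generic single-producer/single-consumer command ring in C++, as a stand-in for the kind of CPU-visible ring buffer being speculated about. The "8 ring buffers per pipe" figure comes from this thread; the code itself is only an illustrative sketch, not the real hardware interface.

```cpp
// Minimal SPSC ring buffer: the CPU pushes command packets, a GPU-side
// front-end pops them. Generic sketch, not an actual PS4 API.
#include <atomic>
#include <cstdint>

struct Packet { uint32_t opcode; uint32_t payload; };

template <uint32_t N>                      // N must be a power of two
class CommandRing {
    Packet  slots[N];
    std::atomic<uint32_t> head{0};         // advanced by the consumer (GPU side)
    std::atomic<uint32_t> tail{0};         // advanced by the producer (CPU side)
public:
    bool push(const Packet& p) {           // CPU: enqueue a command
        uint32_t t = tail.load(std::memory_order_relaxed);
        if (t - head.load(std::memory_order_acquire) == N) return false; // full
        slots[t & (N - 1)] = p;
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(Packet& out) {                // GPU front-end: dequeue a command
        uint32_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire)) return false;     // empty
        out = slots[h & (N - 1)];
        head.store(h + 1, std::memory_order_release);
        return true;
    }
};
```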
 
Will the first generation of PS4 games be able to utilize these features? I find it hard to gauge the discrepancy between the PS4 devkits and the final silicon.
 
Cerny said he doesn't expect developers to use the advanced features at launch. Heck, we don't even know if the tools are ready. :p
 
I think Knack is built on UE4. Hopefully some groundwork has been laid; it should at least get the ball rolling (i.e., be compatible with UE4's technical approaches).
 
(3) He is talking about being able to run code at the same time on the same thread, and (4) is like the CPU playing the part of the PPU in the Cell and the GPGPU playing the part of the SPEs.

[Image: GPGPU+PS4+2.jpg — chart showing the PS4 GPU running graphics and compute jobs on the same CUs at the same time]

Wow, that chart is very hard to misinterpret; they really talk about running the maximum graphics the GPU can do while running compute jobs at the same time.

Is this really, really, really possible?
 
Yes, Knack is UE4.

Could you post a source please? I haven't seen any proof of this yet.

Wow, that chart is very hard to misinterpret; they really talk about running the maximum graphics the GPU can do while running compute jobs at the same time.

Is this really, really, really possible?

Wow, if true, I wonder if Dave can comment on it? AMD should be bragging about it (if true).
 
Wow, that chart is very hard to misinterpret; they really talk about running the maximum graphics the GPU can do while running compute jobs at the same time.

Is this really, really, really possible?

Maybe in the sense of grabbing normally unused resources, like hyperthreading in Intel CPUs.

It would increase heat though, like FurMark.

The 1.843 teraflops is a hard limit though, if that's what you mean.

I'm guessing the diagram is PC Watch's interpretation of vague Sony statements, rather than factual.
 
Could you post a source please? I haven't seen any proof of this yet.

Here it is.


@Averagejoe

Cerny said:

- the GPU is modified to make compute easier
- they made it practical to use the GPU as a general computation device
- tasks that could fully occupy the CPU cores will be achievable using just a fraction of the PS4 GPU
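For scale on that last point, a back-of-the-envelope peak-FLOPS comparison (assuming the widely reported, but at the time unconfirmed, 1.6 GHz Jaguar clock, 8 single-precision FLOPS per core per cycle, and the 800 MHz GPU clock implied by the 1.843 TFLOPS figure):

$$
\begin{aligned}
\text{CPU peak} &\approx 8\ \text{cores} \times 8\ \text{FLOPS/cycle} \times 1.6\ \text{GHz} \approx 102\ \text{GFLOPS},\\
\text{GPU peak} &= 18\ \text{CUs} \times 64\ \text{lanes} \times 2\ \text{FLOPS/cycle} \times 0.8\ \text{GHz} \approx 1843\ \text{GFLOPS},\\
\text{ratio} &\approx 102 / 1843 \approx 5.5\%.
\end{aligned}
$$

In other words, a single CU's worth of ALU throughput (~102 GFLOPS) is already in the same ballpark as the entire CPU's peak.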
 
Wow, that chart is very hard to misinterpret; they really talk about running the maximum graphics the GPU can do while running compute jobs at the same time.

Is this really, really, really possible?

Nope, but I can see how the wording confuses. Analogy time.

Pretend you're the GPU, and you can juggle a few balls of two types: graphics or compute. The PS4 can do that in whatever combination with both hands.

But on PC, if you decide to juggle graphics and compute balls at once, you can only use one hand.
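To stretch the juggling analogy into toy code (everything here is invented purely for illustration; no real GPU API is being modelled): the point is only about mixing the two job types without a mode switch, not about extra throughput.

```cpp
// One dispatcher that can drain a graphics queue and a compute queue in the
// same frame (the "two hands" case), instead of having to empty one before
// switching to the other.
#include <deque>
#include <string>
#include <cstdio>

int main() {
    std::deque<std::string> gfx  = {"shadow pass", "g-buffer", "lighting"};
    std::deque<std::string> comp = {"cloth sim", "particle update"};

    // "Two hands": interleave graphics and compute batches, no mode switch.
    while (!gfx.empty() || !comp.empty()) {
        if (!gfx.empty())  { std::printf("CU batch (gfx):  %s\n", gfx.front().c_str());  gfx.pop_front(); }
        if (!comp.empty()) { std::printf("CU batch (comp): %s\n", comp.front().c_str()); comp.pop_front(); }
    }
    // Total ALU throughput is unchanged; only the ability to mix job types
    // without an expensive switch is different.
}
```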
 
Maybe in the sense of grabbing normally unused resources, like hyperthreading in Intel CPUs.

It would increase heat though, like FurMark.

The 1.843 teraflops is a hard limit though, if that's what you mean.

I'm guessing the diagram is PC Watch's interpretation of vague Sony statements, rather than factual.

Yeah, and by that diagram the graphics side is still undecided: 16 or 32 ROPs.
 
Wow, that chart is very hard to misinterpret; they really talk about running the maximum graphics the GPU can do while running compute jobs at the same time.

Is this really, really, really possible?
No. I'm getting angry now that people keep asking the same question but ignoring the answer because it's not what they want to hear. That diagram spells it out plainly: the difference is that a typical GPU has to switch from graphics to compute, whereas Fusion can perform compute on the CUs at the same time (some CUs processing compute, some processing graphics). The compute explanation is even pointing at the same damned CUs as those labelled 1.843 TFLOPS.

18 CUs delivering 1.843 TFLOPS of calculations. That's the peak limit for any calculations, whether those calculations are processing graphics or non-graphics tasks. If you are running compute at the same time, you are using some of those TFLOPS to calculate non-graphics tasks. It's there in plain print, yet still some people, ignoring everything said beforehand, are going with the misinterpretation. :nope:
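Putting that shared budget in numbers, with a purely hypothetical split of 4 CUs' worth of work on compute and the other 14 on graphics:

$$
\frac{1843.2\ \text{GFLOPS}}{18\ \text{CUs}} \approx 102.4\ \text{GFLOPS/CU},\qquad
4 \times 102.4 \approx 410\ \text{GFLOPS (compute)},\qquad
14 \times 102.4 \approx 1434\ \text{GFLOPS (graphics)}.
$$

However the work is divided, the two parts always sum to the same 1.843 TFLOPS ceiling.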
 