PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Well, because it does, be it the memory chips or the memory controller. Go ahead and compare the discrete GPU cards available with DDR3 and GDDR5. Though we're not talking about a crazy difference; those cards are usually low end, so the TDP isn't high to begin with.

No, it does not. We are talking a couple of watts.
 
Good interview, but DF missed the chance to ask about the PS4 OS memory footprint.
There was quite a lot not covered. Dunno if the interview was time-limited to 10 minutes, maybe, or if some questions got a 'no comment'.

The AMD quote seemed a bit negative towards AMD IMO, which is perhaps refreshing if Cerny was being unintentionally honest...
The timelines are very important, because you might be working with the brightest people in the business but if their product doesn't come out in the specific year that you need it, you can't work with them.
...implies...
...so you work with a less clever, less talented bunch of people who aren't doing anything fancy but at least can deliver on time.
He certainly didn't say, "AMD are the best business partner. They have the best designers and are working on the best CPU/GPU integration. We considered them the only choice regards a partner who can deliver a robust, economical, powerful, and flexible part."
 
That reads a lot like the speculation we had going around here ages back that there was extra ALU for compute.

Sweetvar's 'Scalar ALU’s 320'

My old post about it: http://forum.beyond3d.com/showpost.php?p=1699565&postcount=154

320 / 4 = 80
So 80 ALUs per CU?
That would give an extra 16 ALUs per CU (versus the standard 64),
and then 5 SIMD "blocks" instead of the usual 4.
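For reference, here's that back-of-the-envelope arithmetic spelled out, assuming a stock GCN CU of 4 SIMD-16 blocks (64 ALUs); the 320 figure itself is just the unconfirmed sweetvar leak:

# Back-of-the-envelope check of the "320 scalar ALUs" speculation.
# Assumption: a stock GCN CU has 4 SIMD-16 blocks = 64 vector ALUs;
# the 320 figure comes from the sweetvar leak and is unconfirmed.

LANES_PER_SIMD = 16
STOCK_SIMDS_PER_CU = 4
stock_alus_per_cu = STOCK_SIMDS_PER_CU * LANES_PER_SIMD   # 64

leaked_alus = 320
alus_per_cu = leaked_alus // 4                            # 320 / 4 = 80, as in the post
extra_alus = alus_per_cu - stock_alus_per_cu              # 16 extra per CU
simd_blocks = alus_per_cu // LANES_PER_SIMD               # 5 SIMD-16 blocks instead of 4

print(alus_per_cu, extra_alus, simd_blocks)               # 80 16 5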
No. It looks like the most probable solution (as judged by reasonable people) was the right one. All this stupid "sweetvar's 320 scalar ALUs" talk was and still is just that: bullshit. :rolleyes:
 
No, it does not. We are talking a couple of watts.
Well, it used to make a difference if you look at cards like those.
Now the DDR3 HD 7750 only saves 3 watts vs the GDDR5 one, though as far as I know it's the first time I've seen a card behave that way. AMD could have improved their memory controller, GDDR5 could have caught up with DDR3 power-wise (on the same process), or those cards are junk-grade bins that need a beefy voltage to run at the usual HD 7750 clock speed.

I looked at the Nvidia side and the result is the same (as with the 7750) if you compare the GT 640 and GTX 650. They're not at exactly the same clocks (GPU or memory) and one has twice the memory, but there is a significant difference in power burnt: here and here (by the way, Nvidia's TDP figures are not as useful as AMD's).

I guess I could find more examples, as manufacturers have that nasty habit of selling junk cards with DDR3 (more of it) to attract uneducated buyers...
 
Digital Foundry: Going back to GPU compute for a moment, I wouldn't call it a rumour - it was more than that. There was a recommendation - a suggestion? - for 14 cores [GPU compute units] allocated to visuals and four to GPU compute...

Mark Cerny: That is a bad leak and not any sort of formal evangelisation. The point is the hardware is intentionally not 100 per cent round. It has a little bit more ALU in it than if you were thinking strictly about graphics. As a result of that you have an opportunity, almost like an incentivisation, to use that ALU for GPGPU.

Interesting. Can't wait to see the teardown and X-ray photo.
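While we wait, it's easy to put rough numbers on "a little bit more ALU". A quick sketch, assuming the rumoured 18-CU part at 800MHz with standard GCN CUs (none of this is confirmed silicon):

# Rough throughput numbers for the "14 + 4" talk, assuming standard GCN CUs
# (64 ALUs each, 2 FLOPs per clock via FMA) and the rumoured 800 MHz clock.
ALUS_PER_CU = 64
FLOPS_PER_ALU_PER_CLOCK = 2      # fused multiply-add counts as two ops
CLOCK_HZ = 800e6                 # rumoured GPU clock

def tflops(cu_count):
    return cu_count * ALUS_PER_CU * FLOPS_PER_ALU_PER_CLOCK * CLOCK_HZ / 1e12

print(tflops(14))  # ~1.43 TFLOPS if the GPU were "round" for graphics only
print(tflops(18))  # ~1.84 TFLOPS with the extra ALU Cerny is talking about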
 
No, he meant just a few more CUs.
A few more CUs in comparison to what, or a few more CUs they added specifically to have more ALUs at work?

Judging by your words here, it seems very likely that they modified the GPU in some ways to make it more accessible for GPGPU tasks, and if 3dilettante said what he said... it could be pretty spot on.

http://forum.beyond3d.com/showthread.php?p=1704870#post1704870

It is a more refined approach than just dedicating 4 CUs to specific tasks when they could be needed for graphics.

Still, the theoretical advantage the PS4 holds in the console realm for the next gen, where it is well ahead of the other consoles, hasn't been proven in games, so developers still have a long way to go.
 
...implies...
Does not compute. I don't see how his quote implies anything of the sort; that's just your own bias talking. :p

Rather, I think the only thing you can say that it does imply is that AMD was able to deliver what Sony wanted, and others were not. How does that imply that they are less talented and whatnot?
 
Isn't that where the Onion and Onion+ buses come into play? Onion+ bypasses the GPU caches.

No, Onion+ exists to allow interaction between the CPU and GPU compute.
What I'm saying is that even compute tasks read and write memory, so even if the ALUs are free, the compute jobs impact both the GPU caches and the memory bandwidth. That impacts running graphics tasks regardless.
Not to say it isn't nice to have, just that it's never "free".
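To put the "never free" point in crude numbers, here's a toy model that just treats the rumoured 176GB/s of GDDR5 bandwidth as one shared budget (a real memory controller arbitrates far more cleverly than this):

# Toy model: compute jobs consume memory bandwidth even when the ALUs
# would otherwise be idle, leaving less for graphics. 176 GB/s is the
# commonly quoted PS4 GDDR5 figure; the split below is purely illustrative.
TOTAL_BW_GBS = 176.0

def graphics_bw_left(compute_read_gbs, compute_write_gbs):
    """Bandwidth left over for graphics after compute traffic (toy model)."""
    return max(TOTAL_BW_GBS - (compute_read_gbs + compute_write_gbs), 0.0)

# Example: a compute job streaming 20 GB/s in and 10 GB/s out.
print(graphics_bw_left(20.0, 10.0))   # 146.0 GB/s remaining for graphics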
 
I would love to know how much money Sony paid to AMD for the design of the console.

Given the potential payoffs of a console win with Sony I would guess Sony didn't have to pay much at all.

Keep in mind Sony went with AMD at least a year before MS did. That's why Sony's solution is a bit better than Microsoft's. Sony helped design the GDDR5 memory controller and MS does not have access to it.

In fact I'm willing to bet that AMD's UMA technology is a direct result of its collaboration with Sony.
 
Given the potential payoffs of a console win with Sony I would guess Sony didn't have to pay much at all.

Keep in mind Sony went with AMD at least a year before MS did. That's why Sony's solution is a bit better than Microsoft's. Sony helped design the GDDR5 memory controller and MS does not have access to it.

In fact I'm willing to bet that AMD's UMA technology is a direct result of its collaboration with Sony.

This may be correct. Every major change Sony made to the GPU, AMD has gone on to add to its own processors.
 
Given the potential payoffs of a console win with Sony I would guess Sony didn't have to pay much at all.

Keep in mind Sony went with AMD at least a year before MS did. That's why Sony's solution is a bit better than Microsoft's. Sony helped design the GDDR5 memory controller and MS does not have access to it.

In fact I'm willing to bet that AMD's UMA technology is a direct result of its collaboration with Sony.
?
Where did you hear that Sony helped design the memory controller? I don't recall hearing anything like that.
 
I seriously can't stop being amazed at how people think there's some "special sauce" within the GCN cores themselves. Based on everything out there, both the XB1 and PS4 GPU portions are standard GCN GPUs with somewhat "customized" parts, simply within the customization the GCN architecture itself allows, like the number of ACEs in the front end.

The only question really remaining is whether the CU/TU/ROP ratios on both follow the "PC GPU trend" or whether either one has more or less than usual of something compared to the rest.

edit: and the same goes for the CPU cores. The only "custom stuff" seems to be SHAPE and the PS4 audio block, the Move engines and eSRAM, and possibly the PS4's "enhanced Onion/Garlic buses" (if they're even that; some of the other claimed "customizations" turned out to be standard stuff, too).
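For what it's worth, the counts being reported for both machines already look like ordinary GCN ratios. A quick comparison using the commonly circulated (i.e. still rumoured, not die-shot-confirmed) figures:

# Commonly circulated GPU counts for both consoles at the time of writing.
# Treat these as rumoured figures, not confirmed specifications.
gpus = {
    "PS4": {"CUs": 18, "TMUs": 72, "ROPs": 32, "ACEs": 8},
    "XB1": {"CUs": 12, "TMUs": 48, "ROPs": 16, "ACEs": 2},
}

for name, g in gpus.items():
    # Stock GCN pairs 4 texture units with every CU, so this ratio should be 4.
    print(name, "TMUs/CU:", g["TMUs"] / g["CUs"], "ROPs:", g["ROPs"], "ACEs:", g["ACEs"])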
 
I seriously can't stop being amazed at how people think there's some "special sauce" within the GCN cores themselves. Based on everything out there, both the XB1 and PS4 GPU portions are standard GCN GPUs with somewhat "customized" parts, simply within the customization the GCN architecture itself allows, like the number of ACEs in the front end.

The only question really remaining is whether the CU/TU/ROP ratios on both follow the "PC GPU trend" or whether either one has more or less than usual of something compared to the rest.

edit: and the same goes for the CPU cores. The only "custom stuff" seems to be SHAPE and the PS4 audio block, the Move engines and eSRAM, and possibly the PS4's "enhanced Onion/Garlic buses" (if they're even that; some of the other claimed "customizations" turned out to be standard stuff, too).

Who is talking about special sauce?

The changes Sony made have been detailed.

The three "major modifications" Sony did to the architecture to support this vision are as follows, in Cerny's words:

"First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that's being passed back and forth between CPU and GPU is small, you don't have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That's not very small in today’s terms -- it’s larger than the PCIe on most PCs!

"Next, to support the case where you want to use the GPU L2 cache simultaneously for both graphics processing and asynchronous compute, we have added a bit in the tags of the cache lines, we call it the 'volatile' bit. You can then selectively mark all accesses by compute as 'volatile,' and when it's time for compute to read from system memory, it can invalidate, selectively, the lines it uses in the L2. When it comes time to write back the results, it can write back selectively the lines that it uses. This innovation allows compute to use the GPU L2 cache and perform the required operations without significantly impacting the graphics operations going on at the same time -- in other words, it radically reduces the overhead of running compute and graphics together on the GPU."

Thirdly, said Cerny, "The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands -- the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that's in the system."
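The third change is the easiest one to picture: instead of GCN's stock one graphics source plus two compute sources, you get a bank of 64 compute queues feeding a hardware arbiter. A minimal sketch of the idea, with the caveat that every class and method name here is made up for illustration and has nothing to do with the real SDK:

# Illustrative model of "64 sources of compute commands": 8 ACE-like pipes
# with 8 queues each (8 x 8 = 64), as commonly reported for the PS4 GPU.
# All names here are hypothetical; this is not a real API.
from collections import deque

class ComputeQueue:
    def __init__(self, pipe, index, priority=0):
        self.pipe, self.index, self.priority = pipe, index, priority
        self.commands = deque()

    def submit(self, cmd):
        self.commands.append(cmd)

# 8 pipes x 8 queues = 64 independent sources of asynchronous compute work.
queues = [ComputeQueue(pipe, idx) for pipe in range(8) for idx in range(8)]

def pick_next(queues):
    """Toy arbitration: highest-priority queue with pending work wins."""
    ready = [q for q in queues if q.commands]
    return max(ready, key=lambda q: q.priority, default=None)

queues[5].submit("cull_lights")
print(len(queues), pick_next(queues).commands[0])   # 64 cull_lights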
 
Given the potential payoffs of a console win with Sony I would guess Sony didn't have to pay much at all.

Keep in mind Sony went with AMD at least a year before MS did. That's why Sony's solution is a bit better than Microsoft's. Sony helped design the GDDR5 memory controller and MS does not have access to it.

In fact I'm willing to bet that AMD's UMA technology is a direct result of its collaboration with Sony.
Well, of course, yeah... Microsoft reportedly paid 3 billion to AMD, and when you have example numbers like that, something doesn't add up in that maths.

It is impossible Sony got the tech for free or anything similar; those numbers have a lot of factors to account for.

In the end, AMD are reaping the rewards, but they have been paid for their tech and now they will be paid for every unit sold, I think.
 
Given the potential payoffs of a console win with Sony I would guess Sony didn't have to pay much at all.

Keep in mind Sony went with AMD at least a year before MS did. That's why Sony's solution is a bit better than Microsoft's. Sony helped design the GDDR5 memory controller and MS does not have access to it.

In fact I'm willing to bet that AMD's UMA technology is a direct result of its collaboration with Sony.

The thought that AMD, which had already been making GPUs using GDDR5 and already had UMA technology in the works, was incapable of moving forward without Sony is just inconceivable. Thanks for the laughs. :LOL:
 