Revenge of Cell, the DPU Rises *spawn*

Wouldn't it still be a GNB if it has a GPU in the DPU?
If a DPU can do everything a Northbridge needs to (can it? I find reference to one company using Tensilica DPUs as memory controllers for flash and SSDs, but that's all), and you added a GPU into it, then I guess so. But that's clearly not what's happening in PS4. All the information fits together without any holes requiring explanation via an unannounced DPU being present. AMD furnished Sony (and MS) with an APU based on their PC architecture, consisting of an 8-core Jaguar CPU and a GNB that includes 18 CUs (or 12) and a few customised functional units as are typically part of AMD's APU design (DMA units, memory controller, video block, audio block). There are no additional graphics capabilities beyond the 18 CUs and CPU and video plane hardware. There's no extensive programmability in the functional units beyond the DSP's capabilities or whatever either. There's a Southbridge with a low-power ARM core embedded for background tasks. There certainly isn't any additional processing power available for physics or AI or image processing. If there were, the very public developer documentation would tell us.
 

If it's for the OS, the devs wouldn't really need to know about it.
 
Think about how you can do background removal & silly effects on your live stream while playing games. I don't think they would just waste that type of CPU/GPU processing power on something not many people will use.
 
But they'd waste another block of silicon for it? What?

Also, greenscreening can be done dirt cheap.
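To illustrate "dirt cheap": a naive chroma key is little more than a per-pixel compare and copy. A toy sketch in Python/numpy follows; the key colour and threshold are made up for illustration, and this is not how any actual console camera stack does it.

```python
import numpy as np

def chroma_key(frame_rgb, key=(0, 255, 0), tol=80):
    """Return a boolean mask of pixels close to the key colour.

    frame_rgb: HxWx3 uint8 camera frame.
    key:       the 'green screen' colour to remove (assumed, not measured).
    tol:       per-channel L1 distance threshold, picked arbitrarily here.
    """
    diff = frame_rgb.astype(np.int16) - np.array(key, dtype=np.int16)
    dist = np.abs(diff).sum(axis=-1)      # cheap L1 distance per pixel
    return dist < tol                     # True where background should go

def composite(fg_rgb, bg_rgb, mask):
    """Replace keyed pixels of the foreground with the background frame."""
    out = fg_rgb.copy()
    out[mask] = bg_rgb[mask]
    return out

# Usage with dummy frames standing in for camera / replacement backgrounds:
fg = np.zeros((720, 1280, 3), dtype=np.uint8); fg[..., 1] = 255   # all 'green'
bg = np.full((720, 1280, 3), 40, dtype=np.uint8)
keyed = composite(fg, bg, chroma_key(fg))
```

Per output pixel that's one subtract, one compare and one copy, which is the kind of workload even a modest fixed-function block or a few CPU SIMD lanes handle easily.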
 
What I find interesting from the AMD leak that led to a lawsuit is that Sony originally intended to use a 4-core CPU clocked at 3.2GHz with 2-4GB of RAM rather than an 8-core at 1.6GHz, AND that Sony pulled back their launch and instead decided to use an identical system to Microsoft's, as well as having a lot of free space reserved for non-game apps just like Microsoft.

If Sony had chosen to stick with the 4GB of RAM and 4 cores @ 3.2GHz, there would probably be a lot more distinguishing it from Microsoft's system. Probably a lot more games with an even larger performance gap in Sony's favor. The loss of 1-2GB of unified memory might impact some multiplatform games, such as open-world ones, visually or with more load times. At the same time it might not, as developers may design the games to have parity across platforms.
 
But they'd waste another block of silicon for it? What?

Also, greenscreening can be done dirt cheap.

You're right, how silly of engineers to waste silicon on an IPU/VPU for image processing / computer vision; that sounds just as crazy as wasting silicon on a GPU for graphics.
It's not like they have a glowing controller and a VR headset to track, or facial recognition, or anything like that.

Edit: you should probably call AMD & tell them before they waste silicon on their new GPUs by adding a VPU like the Vision P5 for AR/VR acceleration.

[Image: Cadence chart comparing energy per frame (mJ/frame) for noise reduction on a CPU, a GPU and the Vision P5]
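As a rough picture of the tracking workload mentioned above: finding a bright light bar in a camera frame can be sketched as a threshold plus a weighted centroid. A toy Python/numpy version follows; the brightness cut-off and the test blob are invented, and a real tracker would use calibrated colours, exposure control and temporal filtering.

```python
import numpy as np

def track_glow(frame_rgb, min_brightness=200):
    """Crude 'glowing controller' tracker: brightness-weighted centroid.

    min_brightness is an arbitrary cut-off chosen for this sketch only.
    Returns (x, y) of the bright blob, or None if nothing is bright enough.
    """
    luma = frame_rgb.astype(np.float32).mean(axis=-1)      # quick brightness proxy
    weights = np.where(luma >= min_brightness, luma, 0.0)  # keep only bright pixels
    total = weights.sum()
    if total == 0.0:
        return None
    ys, xs = np.indices(luma.shape)
    return (xs * weights).sum() / total, (ys * weights).sum() / total

# Dummy 720p frame with one bright blob standing in for the light bar:
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
frame[100:120, 300:340] = 255
print(track_glow(frame))   # roughly (319.5, 109.5)
```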




What I find interesting from the AMD leak that led to a lawsuit is that Sony originally intended to use a 4-core CPU clocked at 3.2GHz with 2-4GB of RAM rather than an 8-core at 1.6GHz, AND that Sony pulled back their launch and instead decided to use an identical system to Microsoft's, as well as having a lot of free space reserved for non-game apps just like Microsoft.

If Sony had chosen to stick with the 4GB of RAM and 4 cores @ 3.2GHz, there would probably be a lot more distinguishing it from Microsoft's system. Probably a lot more games with an even larger performance gap in Sony's favor. The loss of 1-2GB of unified memory might impact some multiplatform games, such as open-world ones, visually or with more load times. At the same time it might not, as developers may design the games to have parity across platforms.

Steamroller (I think that's what it was) wasn't ready, & there were also some laws about console power usage being thrown around; I think that had something to do with the consoles using low-power CPUs.
 

I tried to source this graph because it was missing some information needed to interpret it properly. I was able to find that the unit of measure is millijoules per frame, but I didn't see anywhere that the specific noise reduction algorithm was detailed, or which CPU and GPU were being used for the comparison. That's all important to know if you want to determine whether this is a typical efficiency improvement or a corner case representing the best-case scenario, which, given that this graph is from Cadence themselves, is a valid concern. They *are* trying to sell you on the product, after all.
 

It's from a Vision P5 white paper http://ip.cadence.com/uploads/899/Tensilica_Vision_P5_WP_Final_100515-pdf
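For anyone trying to read the chart, the unit itself is easy to sanity-check: millijoules per frame times frames per second gives average power in milliwatts. A quick back-of-the-envelope in Python, with placeholder numbers rather than anything from the white paper:

```python
# Energy per frame (mJ) times frame rate (frames/s) gives average power (mW):
#   power_mW = energy_mJ_per_frame * frames_per_second
energy_mj_per_frame = 2.0        # placeholder value, not a Cadence figure
fps = 30.0
avg_power_mw = energy_mj_per_frame * fps
print(f"{avg_power_mw:.0f} mW average")   # 60 mW average
```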
 
I bet it's an AI that has been fed some technical literature, web sites and discussions. It's not trolling, it's trying to mimic an engineer. That's pretty much the only way I can imagine this thread making any sense.
 
This whole thread is basically a long semantic argument about what a "DPU" is vs a DSP, GPU or CPU.
You say this like it's a bad thing.

Technically this is not really a thread, it's a spawn. Some rules are suspended. The origins of its creation are fuzzy. Context was lost and forgotten. Moderators cannot spawn a spawn (it would be bad). But most important of all, the title is funny.
 
This whole thread is basically a long semantic argument about what a "DPU" is vs a DSP, GPU or CPU.

A GPU is a DPU that was made for Geometry.

I'll explain: devs learned how to make triangles, so engineers made a special processor for it that was fast at delivering triangles. Now think of all the other things that devs have learned to do with a CPU but need a faster way to deliver; that's what DPUs are for.
 
Stop.

Math has always been the foundational core of all computer anything. Triangles are no exception; triangles were selected over quads, for instance, for a variety of reasons. Alongside the GPU we had, and still have, larger register extensions; before that we had math coprocessors. Your explanation sucks. GPUs are good at large matrix math, and even then there are still particular math functions they will not be good at on certain hardware.

You're all over the place; I don't even need to read anything of what you've written to know that. There are many things we've done that could be done differently, but that doesn't necessarily mean they'd be faster, or slower for that matter. Things have become the way they are through a proper process of evolution over time, and we can tie a lot of those decisions back to cost.
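To make the "large matrix math" point concrete, here is the shape of work GPUs (and wide-SIMD DSPs) are built for, next to the kind of serial, data-dependent loop they aren't. A toy comparison in Python/numpy, purely for illustration:

```python
import numpy as np

# GPU/DSP-friendly: one big, regular, data-parallel operation.
a = np.random.rand(1024, 1024).astype(np.float32)
b = np.random.rand(1024, 1024).astype(np.float32)
c = a @ b                      # dense matrix multiply: same math on every element

# GPU-unfriendly: each step depends on the previous one and branches on data.
def serial_chain(x, steps=10000):
    acc = 0.0
    for _ in range(steps):
        acc = acc * 0.5 + x if acc < 1.0 else acc - x   # data-dependent branch
    return acc

print(c.shape, serial_chain(0.3))
```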
 
A GPU is a DPU that was made for Geometry.
You're just making up definitions now, which perhaps explains why no-one can follow your reasoning. A DPU isn't a generic accelerator, and you shouldn't be referring to accelerators as DPUs. If you want to speed up raytracing, adding a raytracing accelerator doesn't mean adding a DPU (although you could add a DPU, as a class of Tensilica architecture, and have that do the job). Same with AI or video decoding. These accelerators will conform to a processor architecture as defined by whatever taxonomy one uses, for which I don't think there's an officially accepted one. So some accelerators will be DSPs, others will be ultra-wide SIMD processors, and so on.

Considering people asked for a definition of DPU earlier just to be able to enter this discussion, and you left it to the rest of us before offering your own definition, which none of us could have guessed, it's no wonder this is another pointless thread that no-one can do anything with.
 