Wouldn't it still be a GNB if it has a GPU in the DPU?
I don't know man, it just seems like, if everything is everything, then nothing means anything... you know what I mean?
If a DPU can do everything a Northbridge needs to (can it? I found reference to one company using Tensilica DPUs as memory controllers for flash and SSDs, but that's all), and you added a GPU into it, then I guess so. But that's clearly not what's happening in PS4. All the information fits together without any holes requiring explanation via an unannounced DPU being present.

AMD furnished Sony (and MS) with an APU based on their PC architecture, consisting of an 8-core Jaguar CPU and a GNB that includes 18 CUs (or 12) plus a few customised functional units, as is typical of AMD's APU designs (DMA units, memory controller, video block, audio block). There are no additional graphics capabilities beyond the 18 CUs, the CPU, and the video plane hardware. There's no extensive programmability in the functional units beyond the DSPs' capabilities either. There's a Southbridge with a low-power embedded ARM core for background tasks. There certainly isn't any additional processing power available for physics or AI or image processing. If there were, the very public developer documentation would tell us.
Think about how you can do background removal & silly effects on your live stream while playing games. I don't think they would just waste that type of CPU/GPU processing power for something not many people will use.

But they'd waste another block of silicon for it? What? Also, greenscreening can be done dirt cheap.
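To illustrate how cheap: a basic chroma key is just a few compares and a pixel select per pixel, which a DSP, a shader, or even plain CPU code handles easily. A minimal sketch in Python (the green thresholds are arbitrary assumptions, not anyone's actual implementation):

Code:
    # Minimal green-screen (chroma key) sketch. Per pixel it is only a few
    # comparisons and a select; thresholds are arbitrary illustrative values.
    def chroma_key_pixel(fg, bg, g_min=100, dominance=40):
        """Return bg if the foreground pixel is 'green enough', else fg.
        fg/bg are (r, g, b) tuples with 0-255 channels."""
        r, g, b = fg
        is_green = g > g_min and g - max(r, b) > dominance
        return bg if is_green else fg

    def composite(fg_frame, bg_frame):
        # One pass over the frame: O(width * height) with trivial math.
        return [[chroma_key_pixel(f, b) for f, b in zip(fr, br)]
                for fr, br in zip(fg_frame, bg_frame)]

    # A green pixel gets replaced; a skin-tone pixel is kept.
    fg = [[(30, 200, 40), (210, 160, 140)]]
    bg = [[(0, 0, 255), (0, 0, 255)]]
    print(composite(fg, bg))  # [[(0, 0, 255), (210, 160, 140)]]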
What I find interesting from the AMD leak that led to a lawsuit is that Sony originally intended to use a 4-core CPU clocked at 3.2GHz with 2-4GB of RAM, rather than an 8-core at 1.6GHz, AND that Sony pulled back their launch plans and instead decided to use a system identical to Microsoft's, as well as reserving a lot of free space for non-game apps, just like Microsoft.
If Sony had chosen to stick with 4GB of RAM and 4 cores @ 3.2GHz, there would probably be a lot more distinguishing their system from Microsoft's. Probably a lot more games with an even larger performance gap in Sony's favor, too. The loss of 1-2GB of unified memory might have impacted some multiplatform games, such as open-world ones, visually or with longer load times. At the same time it might not have, as developers may design their games to have parity across platforms.
I've learned more watching Dora with my daughter. I'm not sure if this thread is genius trolling or hopelessly misguided. Whatever it is, it's not that funny though.
What are the numbers on the vertical axis supposed to represent?
I tried to source this graph because it lacked some information necessary to interpret it properly. I was able to find that the unit of measure is millijoules/frame, but I didn't see anywhere that detailed the specific noise-reduction algorithm or the specific CPU and GPU being used for the comparison. That's all important to know if you want to determine whether this is a typical efficiency improvement or a corner case representing the best-case scenario. Given that this graph is from Cadence themselves, that's a valid concern. They *are* trying to sell you on the product, after all.
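For scale: millijoules/frame is just average power divided by frame rate, so the absolute numbers only mean something once you know which hardware produced them. A quick worked example (the wattages and frame rates below are made up for illustration, not taken from the Cadence graph):

Code:
    # Energy per frame = average power / frame rate.
    # 1 W = 1000 mJ/s, so mJ/frame = watts * 1000 / fps.
    # All figures here are illustrative assumptions, not Cadence's data.
    def mj_per_frame(watts, fps):
        return watts * 1000.0 / fps

    print(mj_per_frame(10.0, 30.0))  # hypothetical 10 W GPU at 30 fps: ~333 mJ/frame
    print(mj_per_frame(0.5, 30.0))   # hypothetical 0.5 W DSP at 30 fps: ~17 mJ/frame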
This whole thread is basically a long semantic argument about what a "DPU" is vs. a DSP, GPU, or CPU.

You say this like it's a bad thing.
Stop. A GPU is a DPU that was made for Geometry.
I'll explain: devs learned how to make triangles, so engineers made a special processor that was fast at delivering triangles. Now think of all the other things that devs have learned to do with a CPU but need a faster way to deliver. That's what DPUs are for.
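To make the triangle point concrete, here's a toy version of the rasteriser's core inner test in plain Python: a small, regular computation that pays off enormously when baked into dedicated parallel silicon. The coordinates are arbitrary and a real GPU does far more than this, so treat it as a sketch of the idea only:

Code:
    # Point-in-triangle test via three edge functions: the bread-and-butter
    # inner loop of rasterisation. A GPU effectively runs millions of these
    # in parallel every frame.
    def edge(ax, ay, bx, by, px, py):
        # Signed area: >= 0 means (px, py) lies on the inner side of edge a->b.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def inside_triangle(tri, px, py):
        (x0, y0), (x1, y1), (x2, y2) = tri
        return (edge(x0, y0, x1, y1, px, py) >= 0 and
                edge(x1, y1, x2, y2, px, py) >= 0 and
                edge(x2, y2, x0, y0, px, py) >= 0)

    tri = [(0, 0), (10, 0), (0, 10)]   # counter-clockwise winding
    print(inside_triangle(tri, 2, 2))  # True: pixel is covered
    print(inside_triangle(tri, 8, 8))  # False: pixel is outside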
You're just making up definitions now, which perhaps explains why no-one can follow your reasoning. A DPU isn't a generic accelerator, and you shouldn't be referring to accelerators as DPUs. If you want to speed up raytracing, adding a raytracing accelerator doesn't mean adding a DPU (although you could add a DPU, as a class of Tensilica architecture, and have that do the job). Same with AI or video decoding. These accelerators will conform to a processor architecture as defined by whatever taxonomy one uses, for which I don't think there's an officially accepted one. So some accelerators will be DSPs, others will be ultra-wide SIMD processors, etc.