The original PS4 already has 256MB for the same purpose, which is a small cache for app swapping (not for actively running apps in the background, as you and others have suggested), and it's a measly single DDR3 chip on a 16-bit bus, so 3.2GB/s maximum, connected through the southbridge.
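(If anyone wants to check the math on that 3.2GB/s figure, it's just peak-bandwidth arithmetic. I'm assuming a DDR3-1600 speed grade for the chip, so treat the exact numbers as a sketch rather than a spec sheet.)

```python
def peak_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers/s times bytes moved per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# Assumed DDR3-1600 chip behind the southbridge: 1600 MT/s over a 16-bit bus.
print(peak_gb_s(1600, 16))    # -> 3.2 GB/s
# For contrast, the PS4's main pool: 8GB GDDR5 at 5500 MT/s over a 256-bit bus.
print(peak_gb_s(5500, 256))   # -> 176.0 GB/s
```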
No. It's right in the text! That 256 MB was *not* being used for that purpose in PS4.
"On a standard model, if you're switching between an application, such as Netflix, and a game, Netflix is still in system memory even when you're playing the game. We use that architecture because it allows for a very quick swap between applications. Nothing needs to be loaded, it's already in memory."
Some might say it's an extravagant use of the PS4's fast GDDR5 memory, so the extra DDR3 memory in the Pro is used to store non-critical apps, opening up more RAM for game developers.
The 256MB in the PS4 is only there to support the secondary SoC, because it has to have SOME memory to operate (it runs its own instance of FreeBSD and has to keep running with the rest of the system shut down). Since that bus already existed, and the PS4's main SoC could access it through the secondary SoC, they used it as a cheap way to add additional memory to the system.

They did this because 8GB was not enough. And if 8GB is not enough, it doesn't really matter whether you need 8.5 GB or 11.9 GB of capacity: you still need to find a way to add more, and in the case of XBOne -> Scorpio this required adding some kind of bus. They chose to kill two birds with one stone and address capacity and bandwidth with the same solution (rough numbers below). Since the capacity is there, I'm quite certain developers will find some way to use it.
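To put some numbers on the "two birds with one stone" point, here's a quick sketch of the two routes. The speed grades and sizes are the commonly cited ones, so treat the exact figures as assumptions.

```python
def peak_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers/s times bytes moved per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# XBOne -> Scorpio: a new, wider GDDR5 bus fixes capacity (8GB -> 12GB)
# and bandwidth in one move.
print(peak_gb_s(2133, 256))   # Xbox One, 8GB DDR3-2133, 256-bit bus  -> ~68 GB/s
print(peak_gb_s(6800, 384))   # Scorpio, 12GB GDDR5, 384-bit bus      -> ~326 GB/s

# PS4 -> Pro: keep the 8GB GDDR5 pool on its 256-bit bus (just clocked higher)
# and hang an extra 1GB of DDR3 off the existing southbridge link, which adds
# capacity for parking apps but essentially no bandwidth.
print(peak_gb_s(6800, 256))   # PS4 Pro, 8GB GDDR5, 256-bit bus       -> ~218 GB/s
```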
Yet you didn't know the additional RAM chip already existed in the original PS4, nor what it was meant for.
I did, actually, on both counts and now you do too, hopefully.
Lol yes, you should send an e-mail to Nvidia saying their GTX 1070 "makes no sense" and that they should've created the GP104 with a 384-bit bus instead.
They'll learn a lot from your wisdom.
Didn't say that. I said creating a memory controller that supported GDDR5X just to have a differentiator didn't make sense. There are easier and cheaper ways to gimp your lesser SKUs, the most obvious one being to just use slower memory.

I think you've got the tail wagging the dog here. The decision was made to use GDDR5X for the 1080 because that was the best choice for that product. The 1070 could have had 6GB of GDDR5X on a narrower bus, or 8GB of slower GDDR5X on a 256-bit bus, which kind of defeats the purpose of using GDDR5X in the first place. So to get 8GB of capacity at a lower memory spec, 8GB of GDDR5 makes the most sense and lets you use the same board for both.
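Back-of-the-envelope version, if it helps (per-pin speeds are the usual published ones, and the 192-bit variant is purely hypothetical):

```python
def peak_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers/s times bytes moved per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# GTX 1080 as shipped: 8GB GDDR5X at 10 Gb/s per pin (10000 MT/s), 256-bit bus.
print(peak_gb_s(10000, 256))  # -> 320 GB/s
# GTX 1070 as shipped: 8GB GDDR5 at 8 Gb/s per pin, on the same 256-bit board.
print(peak_gb_s(8000, 256))   # -> 256 GB/s
# Hypothetical "differentiated" 1070: 6GB of GDDR5X on a cut-down 192-bit bus.
print(peak_gb_s(10000, 192))  # -> 240 GB/s, i.e. less capacity AND less bandwidth
```

Same board, same memory controller, just slower chips: that's the cheap differentiator.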