I don't think that's the right way to look at it. RAM is just a cache between the data we want (on storage) and where it needs to be in order to be processed (the register files in the processors). Presently we have maybe 8 GB of data in RAM caching what's on the drive, and something like 8-32 MB of L3 cache on the CPUs. That L3 cache is tiny but effective because at any given moment the CPU doesn't need access to all 8 GB of data, only a few KB to feed the L1 and register files, so the caches are just prefetching: just enough to keep the L1/registers populated, and the CPUs (thanks to multithreading) are kept busy at high utilisation. The more you rely on streaming from the SSD, the more you are limited by the SSD's performance.
Ideally, we'd have all 80 GBs in L1 cache at 4 cycles access, but that's impossible. So we use a small cache to keep it fast and feed that from a bigger bucket.
Ideally, we'd have all 80 GBs in L2 cache at 10 cycles access, but that's impossible. So we use a small cache to keep it fast and feed that from a bigger bucket.
Ideally, we'd have all 80 GBs in L3 cache at 100 cycles access, but that's impossible. So we use a small cache to keep it fast and feed that from a bigger bucket.
Ideally, we'd have all 80 GBs in DDR at 15 ns access, but that's expensive. So we use a smaller pool to keep it affordable and feed that from a bigger bucket.
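A rough back-of-envelope sketch of that ladder, in Python. The L1/L2/L3 latencies match the cycle counts above; the DRAM and SSD latencies and all of the per-level hit rates are my own illustrative assumptions, not measurements:

    # Back-of-envelope average access time for the ladder above.
    # Latencies are in CPU cycles; L1/L2/L3 match the figures in the comment,
    # DRAM (~100 ns) and SSD (~100 us) at 4 GHz are rough assumptions,
    # as are the assumed per-level hit rates.
    hierarchy = [
        # (level, latency in cycles, assumed fraction of accesses served here)
        ("L1",   4,       0.90),
        ("L2",   10,      0.06),
        ("L3",   100,     0.03),
        ("DRAM", 400,     0.0099),
        ("SSD",  400_000, 0.0001),
    ]

    avg_cycles = sum(frac * lat for _, lat, frac in hierarchy)
    print(f"average access: {avg_cycles:.1f} cycles (~{avg_cycles / 4:.1f} ns at 4 GHz)")
    # Even a 0.01% trip to the SSD contributes 40 cycles to the average,
    # which is why the RAM pool has to be big enough to make that miss rate tiny.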
Each pool is only as large as it needs to be to account for the slowness of the next pool down. The only reason RAM is so large is that the final step, storage, is so incredibly slow that it needs an epic pool to basically cache all the data. Once storage is fast enough, pressure on the RAM cache is greatly reduced and we can think of it more like the rest of the cache topology, where 1/10th of the data size is a huge amount of caching; quite frankly wasteful, but needed at the moment because software is designed around using RAM as the storage instead of as a cache.
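To put a hypothetical number on that last point: ask how much of the traffic that misses L3 the RAM layer has to absorb to keep the average miss cost within some budget, and the required hit rate (and hence the required RAM size, for a given access pattern) falls quickly as storage gets faster. The latencies and the 500 ns budget below are assumptions for illustration, not real measurements:

    # For a target average cost per L3 miss, what fraction of those misses does
    # DRAM have to serve? Solve: hit*DRAM_NS + (1 - hit)*storage_ns = BUDGET_NS.
    # All numbers are illustrative assumptions.
    DRAM_NS = 100        # assumed DRAM latency
    BUDGET_NS = 500      # assumed acceptable average cost per L3 miss

    for name, storage_ns in [("HDD", 10_000_000), ("SATA SSD", 100_000), ("fast NVMe", 10_000)]:
        hit = (storage_ns - BUDGET_NS) / (storage_ns - DRAM_NS)
        print(f"{name:9}: DRAM must serve {hit:.3%} of L3 misses")

Under those assumptions, going from ~99.996% (HDD) to ~96% (fast NVMe) of misses served from RAM is the difference between caching essentially the whole working set and caching a modest slice of it.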