insanepunkclown
Newcomer
I've been thinking: the Wii U's L2 cache is eDRAM at 45nm, and backward compatibility with Wii games/software (plus homebrew that plays GameCube games on Wii U) has shown that the latency of the Wii U CPU's eDRAM L2 cache is at least equal to that of the SRAM caches in the original hardware.
When 14nm matures enough for full-scale mass production and eDRAM yields are acceptable or good, would it make sense to have an off-die eDRAM L2 cache? At the 14nm node the latency could be acceptable for an off-die L2 built from eDRAM, from more expensive SRAM, or possibly from 1T-SRAM if someone chooses to use it.
Would it somehow make financial sense if someone made a viable design for mobile/handheld and even home consoles?
I am aware that Intel APUs with the top-of-the-line Iris graphics have, if I remember correctly, a 128MB L4 cache that isn't embedded in the CPU die but sits on the package (an MCM?), and it is used as an L4 cache for the CPU and as the GPU's VRAM(?)...
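For what it's worth, the usual way to check whether an off-die cache's latency is "acceptable" is a pointer-chasing microbenchmark, something like the rough C sketch below. The working-set sizes are just my assumptions (one that fits in a typical L2, one that fits in a Crystalwell-sized 128MB cache, one that spills to main memory); treat it as a sketch, not a tuned benchmark.

[code]
/* Rough pointer-chasing latency sketch (sizes are assumptions, not tuned). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERATIONS (1 << 24)   /* dependent loads per measurement */

/* Sattolo's algorithm: build one big cycle through all indices, so every
 * load depends on the previous one and hardware prefetchers can't help. */
static void build_chain(size_t *chain, size_t count)
{
    for (size_t i = 0; i < count; i++)
        chain[i] = i;
    for (size_t i = count - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = chain[i];
        chain[i] = chain[j];
        chain[j] = tmp;
    }
}

static double measure_ns_per_access(size_t bytes)
{
    size_t count = bytes / sizeof(size_t);
    size_t *chain = malloc(count * sizeof(size_t));
    if (!chain) { perror("malloc"); exit(1); }
    build_chain(chain, count);

    size_t idx = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERATIONS; i++)
        idx = chain[idx];          /* next address unknown until this load finishes */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;    /* keep the loop from being optimized away */
    (void)sink;
    free(chain);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / ITERATIONS;
}

int main(void)
{
    /* assumed working sets: inside L2, inside a 128MB-class cache, in DRAM */
    size_t sizes[] = { 256 * 1024, 32 * 1024 * 1024, 512 * 1024 * 1024 };
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("%8zu KiB: %.1f ns per access\n",
               sizes[i] / 1024, measure_ns_per_access(sizes[i]));
    return 0;
}
[/code]

The dependent-load chain is the important part: each access has to wait for the previous one, so the numbers reflect raw access latency rather than bandwidth, which is what would tell you whether an off-die L2 (eDRAM, SRAM, or 1T-SRAM) is close enough to on-die latency to be usable.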