I am looking forward to the next generation, and then looking back over this thread to see how far off or how close the predictions were.
Did anybody predict the PS3/X360/Wii before they were leaked/released?
He's basing that off a VLIW4 7950 and not GCN, which we now know it to be. A Pitcairn Pro will have 20 CUs, or 1280 ALUs (the Pitcairn XT will have 24 CUs/1536 ALUs, and the Pitcairn LE will likely be a downclocked Pro). Now unless the 7850 is just a downclocked 7950, which we have zero reason to believe, that would mean a 16 CU/1024 ALU or 12 CU/768 ALU GCN part, or maybe an 1152 ALU VLIW one. The chance of a console going with a full set of CUs on a GCN chip without any harvesting is highly unlikely. That means your most likely console AMD GPU will have 20 CUs/1280 ALUs, and will likely be clocked lower (800MHz) too.
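A quick sketch of the arithmetic behind those figures, assuming the standard GCN layout of 64 ALUs (stream processors) per Compute Unit; the part names just mirror the configurations being speculated about:

```python
# GCN: each Compute Unit contains 64 ALUs, so ALU count = CUs * 64.
GCN_ALUS_PER_CU = 64

for name, cus in [("Pitcairn XT", 24), ("Pitcairn Pro", 20),
                  ("harvested 16 CU part", 16), ("harvested 12 CU part", 12)]:
    print(f"{name}: {cus} CUs -> {cus * GCN_ALUS_PER_CU} ALUs")
# 24 -> 1536, 20 -> 1280, 16 -> 1024, 12 -> 768, matching the numbers above.
```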
bgassassin said: This is an abbreviated prediction I made a couple weeks ago.
Xbox3
- 6-core "Xenon" 3.5GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5
PS4
- 4 or 6-core (AMD-based) 3.2GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5
Wii U
- 3-core (POWER7-based) 3.5GHz
- 640-800 ALUs (don't know which architecture) 600-800MHz
- 1.5GB GDDR5, 32MB 1T-SRAM
I don't think there will be too much of a difference between Xbox3 and PS4. Also while I have them at 2GB of memory, I see them going with 4GB if there is an increase in GDDR5 density.
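A rough illustration (my own numbers, not from the post) of why a GDDR5 density increase translates directly into more capacity on the same board: capacity scales with chip density while the bus width, set by the chip count, stays put.

```python
# GDDR5 devices present a 32-bit interface, so with a fixed chip count the
# bus width is unchanged and capacity scales with per-chip density.
def gddr5_config(chips, density_gbit):
    capacity_gb = chips * density_gbit / 8   # Gbit -> GB
    bus_width_bits = chips * 32              # x32 GDDR5 devices
    return capacity_gb, bus_width_bits

print(gddr5_config(8, 2))   # (2.0 GB, 256-bit bus) at current density
print(gddr5_config(8, 4))   # (4.0 GB, 256-bit bus) after a density increase
```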
You mean illegitimate? Perhaps, but here is the source.
It should also be considered that current high-end GPUs still have a number of fixed-function units that could be removed or reduced. It cannot be done in the PC world yet, because it would lead to a performance catastrophe with current and older games, but in a console it is possible. You could remove the ROPs and rasterize the graphics within the shader core, and further reduce the TMU:shader ratio.
The above speculation doesn't take into account the relevance of the CPU. If the CPU will be less relevant than before, then it certainly won't be 176mm^2 like it was originally, and you could put a bit more into the GPU.
Haven't various devs suggested there would be many more cores than this gen? And could it be feasible to have 2GB of DDR3 or another cheaper type of memory alongside the GDDR5?
GCN in its current form would not be the one to do this. The graphics export path gets its own bus to the ROPs, which saves a lot of traffic over the read/write cache. The rasterization component is not significant in size, but a CU or quad of CUs is.
Changing the TMU count may be of marginal benefit. The texture path sits on the general memory path to the L1, so a lot of the hardware the TMUs use is going to stay in place regardless.
Let's wait and see how it goes, but another opinion: I hope for something like 64MB of eDRAM (shared between the CPU and GPU) at 512GB/sec, almost like the POWER7's L3 cache*, and no more than 1.5GB of RAM (GDDR3 or 1T-SRAM) at 32GB/sec. A rough bandwidth check follows below the footnote.
*http://www.7-cpu.com/cpu/Power7.html
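Back-of-the-envelope check of those bandwidth figures; the bus widths and transfer rates here are assumptions picked to hit the quoted numbers, not anything stated in the post:

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * transfer rate in GT/s.
def bandwidth_gbps(bus_width_bits, transfer_rate_gtps):
    return bus_width_bits / 8 * transfer_rate_gtps

print(bandwidth_gbps(128, 2.0))    # ~32 GB/s, e.g. GDDR3 at 2 GT/s on a 128-bit bus
print(bandwidth_gbps(2048, 2.0))   # ~512 GB/s, e.g. a very wide on-die eDRAM interface
```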