That's probably the lowest-hanging fruit: just reusing the base RV740 design (although it's hairier than that, of course).
However, I'm having trouble seeing them provide even remotely adequate bandwidth for such a wide beast, irrespective of what they'd do with regard to memory.
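Quick back-of-the-envelope in Python (the 128-bit bus and the data rates are my assumptions, nothing more):

```python
# Peak-bandwidth arithmetic for a hypothetical 128-bit bus.
# Bus width and data rates are illustrative assumptions, not leaked specs.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x transfers/s / 8 bits per byte."""
    return bus_width_bits * data_rate_mt_s * 1e6 / 8 / 1e9

print(peak_bandwidth_gb_s(128, 1600))  # DDR3-1600:         25.6 GB/s
print(peak_bandwidth_gb_s(128, 4000))  # GDDR5 @ 4Gbps/pin: 64.0 GB/s
```

The stock RV740 board gets 51.2GB/s out of its GDDR5, so a DDR3 setup like that would be feeding the same shader array with half the bandwidth.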
GDDR5 raises some interesting challenges, at least in its current implementation in ATI memory controllers. One is that down-clocking it when idle isn't possible in a seamless way (switching the memory clock mid-operation tends to produce visible flicker).
500MHz is what I've been told. Also, DDR3. But like I said: no idea how reliable that information is.

Hmm... 80W under load @ 750MHz core. What's the power consumption like at, say, 600 or 650MHz (plus the accompanying voltage adjustment)?
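For what it's worth, the usual first-order approximation is P ∝ f·V². A rough sketch, where only the 80W @ 750MHz point comes from the post above and the voltages are pure guesses on my part:

```python
# First-order dynamic-power scaling: P ~ f * V^2. This ignores leakage/static
# power, which does NOT scale this way, so treat the outputs as optimistic.
# Only 80W @ 750MHz is from the thread; reference voltage and the two
# clock/voltage pairings below are assumed for illustration.

def scaled_power(p_ref_w, f_ref_mhz, v_ref, f_mhz, v):
    return p_ref_w * (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

P_REF, F_REF, V_REF = 80.0, 750.0, 1.0  # assumed 1.0V at the reference point

for f, v in [(650.0, 0.95), (600.0, 0.90)]:
    print(f"{f:.0f}MHz @ {v:.2f}V -> ~{scaled_power(P_REF, F_REF, V_REF, f, v):.0f}W")
# 650MHz @ 0.95V -> ~63W
# 600MHz @ 0.90V -> ~52W
```

Real numbers would be less rosy once leakage is counted, but it shows why a modest clock/voltage drop buys a lot of thermal headroom.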
I'd almost prefer 480SPs so they can still keep higher clocks for, say, geometry, but who knows what the goal is exactly. As we all know, anything above 320SPs already significantly outclasses the current gen in shading ability, but if the clocks are gimped, then geometry rate suffers.
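To put numbers on that trade-off (the clock pairings below are hypothetical, just to show the shape of it):

```python
# Peak MADD throughput for ATI VLIW5 parts: SPs x 2 FLOPs/clock x clock.
# Clock pairings are hypothetical, purely to illustrate the SP-count vs
# clock trade-off; Xenos (~240 GFLOPS) is the current-gen reference point.

def gflops(sps: int, clock_mhz: float) -> float:
    return sps * 2 * clock_mhz / 1000.0

print(gflops(320, 750))  # 480.0 GFLOPS
print(gflops(480, 600))  # 576.0 GFLOPS
print(gflops(640, 500))  # 640.0 GFLOPS, but setup rate (1 tri/clock) drops to 500 Mtri/s
```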
He was talking about the devkit before the current one, and the way I understand some of the statements earlier this month, there have been quite a few (significant?) changes with the latest revision. And I don't even think developers have target specs for the final hardware, either.

DDR3 would explain quite a lot at E3.
500MHz sounds a bit surprising, and I'm sure that's still one of the unsettled specs. What was worrying were the reports of overheating dev kits. Granted, an RV740 shoved into that chassis would have really awful cooling compared to the stock PC card, but that's why I wanted to know what the power characteristics were for lower-clocked iterations.
Just keep in mind that launching on 28nm, a new and certainly immature process, isn't going to be cheap.
For comparison's sake, TSMC had 65nm at the end of 2005, yet we didn't see its use in consoles or even other GPU markets until mid to late 2007. It's not like the complexity of the smaller process nodes is going to make the transition to 28nm any smoother for mass production - just take a look at how long 28nm has been delayed in the first place.
You might actually consider that RSX is in the same family architecturally as NV4x...

That's not what I meant. I'm fully aware of RSX's GPU being G7x. I'm referring to how these console chips have many extras integrated. For one, they all serve as a northbridge. I don't know what else is in Xenos and RSX, but Hollywood has an ARM CPU for system and security tasks and an audio DSP.

There's an AMBA bus and an AMBA-to-AMBA bridge on the Wii U GPU die, so there will certainly be an ARM core as well.
NV2A was a GeForce 4 Ti, but it was also a northbridge. A super nForce IGP, if you will.
The only off-the-shelf choice I could see Nintendo using would be Llano, because of its already high integration.
What I don't get is why the GPU is not integrated, as it is pretty tiny.

An RV730 at 40nm should come in around 72mm², the same size as the GPU inside Hollywood; the memory controller, I/O controller, and eDRAM could be integrated in the other part.
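That estimate checks out as a naive optical shrink. A sketch, taking RV730's published ~146mm² 55nm die size as the starting point and assuming perfect area scaling (which never actually happens, so treat the result as a best case):

```python
# Naive optical-shrink estimate: die area scales with the square of the
# feature-size ratio. 146mm^2 is RV730's published 55nm die size; real
# shrinks never scale perfectly, so ~77mm^2 is a best-case floor.

def shrunk_area_mm2(area_mm2: float, old_nm: float, new_nm: float) -> float:
    return area_mm2 * (new_nm / old_nm) ** 2

print(shrunk_area_mm2(146.0, 55.0, 40.0))  # ~77mm^2, close to the 72mm^2 quoted above
```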
(which limits the possible providers to NEC, right?)

TSMC can fab eDRAM as well. It was actually cheaper for Microsoft, as NEC was charging more, IIRC.