Inuhanyou
Veteran
Ah i see, thank you for the information.

GDDR3 is derived from DDR2, and GDDR5 from DDR3. In a nutshell, each is optimized to achieve higher frequencies (and thus provide more bandwidth) at the expense of worse latency. When MS used one unified pool of GDDR3 they were favoring GPU performance (particularly texture fetches) over CPU performance (a really high L2 cache miss penalty), although they may have thought that streaming loads on the CPU would dominate and benefit from the improved bandwidth too.
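To make the latency tradeoff concrete, here's a minimal sketch of the classic average-memory-access-time formula. All the numbers below are made-up illustrative values, not measured figures for any real console; the point is just that a longer miss penalty hurts latency-bound CPU code even if peak bandwidth goes up.

```python
def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

# Hypothetical DDR2-style pool: lower peak bandwidth, shorter miss penalty.
ddr2_like = amat(hit_time_cycles=2, miss_rate=0.05, miss_penalty_cycles=300)

# Hypothetical GDDR3-style pool: higher peak bandwidth, longer miss penalty.
gddr3_like = amat(hit_time_cycles=2, miss_rate=0.05, miss_penalty_cycles=500)

print(ddr2_like)   # 17.0 cycles per access on average
print(gddr3_like)  # 27.0 cycles per access on average
```

A GPU streaming textures mostly hides that penalty behind many in-flight requests, which is why the same memory choice can be a win for graphics and a loss for pointer-chasing CPU code.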
At this point, having heard so many conflicting things about Wii U's hardware potential from so many sources, it's all a bit overwhelming. One thing is clear though: Nintendo obviously prioritized being a part of the current console market in terms of hardware, as opposed to aiming for some sort of seat in between the current HD twins and the next set.
I'm assuming that their DX10-equivalent feature set will get them some part of the way in terms of effects and shader rendering; maybe they figured that would be enough to split the difference?
Honestly, i'm not sure how big a difference DX10-equivalent effects will make versus the 360 and PS3's DX9-equivalent feature sets. And that's before we get to the little issue of still having no idea about the actual bandwidth of the EDRAM or the general processing capabilities of the GPU.
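For what it's worth, peak bandwidth figures like these come from simple arithmetic: bus width in bytes times effective transfer rate. The 360's main-memory number checks out that way; the EDRAM figure is exactly the unknown, since it depends on the internal bus width and clock Nintendo chose.

```python
def peak_bandwidth_gbs(bus_width_bits, transfers_per_sec):
    """Peak bandwidth in GB/s = bus width (bytes) * effective transfer rate."""
    return (bus_width_bits / 8) * transfers_per_sec / 1e9

# Xbox 360 main memory: 128-bit GDDR3 at 700 MHz, double data rate -> 1.4 GT/s.
print(peak_bandwidth_gbs(128, 1.4e9))  # -> 22.4 GB/s

# Wii U EDRAM: bus width and clock are unknown, so any figure here
# would be pure speculation; a wide on-die bus is the usual reason
# embedded memory can dwarf external bandwidth.
```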
I've heard people saying RV770, but isn't that a bit too high for what we've been seeing in games so far, even taking into account what some people would call the "quick and dirty port" nature of those games?
I mean, we are all pretty much focusing in on the fact that the GPU is most likely the beefiest portion of the Wii U at this point. So to see only a marginal improvement in Wii U games, even with the substantially bigger amount of EDRAM compared to the 360, says to me that it can't be that much more powerful.
Wouldn't a die-shrunk RV730 or RV740 be more appropriate, given that they are also apparently trying to fit the EDRAM onto the GPU die as well?
Sorry if that sounds a bit ignorant, as i'm not sure how clock speeds or modern architectural efficiencies would factor into that kind of performance.