Predict: The Next Generation Console Tech

You can build that kind of PC; it will be useful for GPGPU, though not for graphics.

Have a look at this: the "FASTRA" computer from the University of Antwerp was/is a Phenom with four 9800GX2 cards, meant as a cheap "desktop supercomputer".
http://fastra.ua.ac.be/en/index.html

I remember when those guys were building that PC, they asked on THG (back when I posted there) which PSU would power four 9800GX2s. I was like wtf, but recommended the 1500W Thermaltake model. Lo and behold, that's what they ended up with, surely because of my most helpful suggestion :LOL:
 
Well, I'd wonder if there would be much savings by customizing the chip away from GPGPU features such as double precision... This is, after all, a gaming system.
 

Try telling that to the PS3 F@H crowd; for some reason, people now see F@H as a benchmark :/
 
An ATI rep pimped the Wii's graphical capabilities pre-launch; it means nothing. I'm in agreement with corduroygt: I don't think we'll see anything much beyond a 5850, at least in terms of die size, though I'm sure that with specific tweaks and a newer architecture they can eke out some more efficiency.

So 2 billion transistors and a ~3 teraflop chip is the sort of ballpark I'd be expecting. That's still going to be pretty big for a console chip, even at 22nm; expecting anything more out of a small $300 box is just wishful thinking.

Interesting. Honestly, I did not think about that (just hype), and I did not know about ATI hyping the Wii's graphical abilities. Thanks for the info.
 
Well, I'd wonder if there would be much savings by customizing the chip away from GPGPU features such as double precision... This is, after all, a gaming system.

I wouldn't say there would be much in it. Juniper has 1.04B transistors for half the specifications of Cypress, which has 2.15B. Is there really much in the double precision itself? It seems to be an add-on to the single-precision stream processors. There's the other stuff like the UVD, but they would probably want to keep that to lower CPU usage when decoding H.264 etc.
 
But to answer the question: GPGPU (general-purpose GPU computing) is the trend of offloading traditionally CPU-bound tasks onto the GPU. GPUs increasingly have logic built in to support a more general instruction set, and are thus well suited to computational tasks that benefit from high degrees of parallelism. For those who keep their focus more squarely on the modern console space, one way to think of it is that the space for which Cell's SPEs were highly suited is also the space in which modern GPUs are increasingly competitive.
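
To make that a bit more concrete, here is a minimal GPU-compute sketch in CUDA (purely illustrative, not anything from this thread or tied to any console hardware): a SAXPY kernel, y = a*x + y, which is exactly the kind of embarrassingly parallel work that suited Cell's SPEs and that GPU stream processors now handle well.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                     // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);              // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}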

While you give a good general summary, the original question is still relevant: do GPGPU features make sense from a total cost perspective on a console? Arguably, the major reason they are there at all on desktop GPUs is that they allowed nVidia/ATI to attempt to expand their business opportunities while letting PC gamers subsidize the development of the necessary hardware and software platforms.

As a computational scientist, I find this both funny and deeply ironic.

My take is that the console space needs efficiency. Hardware costs go beyond gate count and IC cost: cooling solutions, power supplies and cabinets, packaging, warehousing and transportation costs, et cetera all point to a strong interest in keeping absolute power draw down, quite apart from the obvious fact that a lot of consumers don't want their entertainment devices to call attention to themselves. The factors motivating the development we've seen in the PC space don't exist in the console space; in fact, there are very good reasons to avoid any size/power/feature bloat.

However, you have to take R&D costs into account, as well as the effort and cost involved in supplying effective software development tools, not to mention development time/time to market. If we see any GPGPU features in consoles, this is where you'll find the reason, IMO: using devices that are close to what the graphics suppliers had lying around anyway brings cost and efficiency savings as well, just in other places.
 
the original question is still relevant - do GPGPU features make sense from a total cost perspective on a console?

The original question was simply: what is GPGPU? That's the question I answered.

The question you're answering is something else entirely. But in the vein of that particular question, my take is this: 'CPU' in terms of next-gen will simply be the number of transistors dedicated as such in order to handle the expected non-graphics workload. Where those transistors are located becomes less important. I could make an argument for a larger GPU drawing from die area that would otherwise have gone to the CPU, if the transition were towards GPGPU functionality handling certain expected loads that would otherwise have necessitated a more robust CPU. And then you have the additional benefit of flexibility on the GPU: should the full extent of that compute power not be needed in a given title, it can be refocused towards graphics. Not that additional variables don't come into play there... But I think an argument for GPGPU functionality certainly exists; it is simply about load balancing across transistor/die budgets in the way that gives the most flexibility/greatest efficiency.

And that's not to say that I think GPGPU is thus a lock, just that it has a case. APUs on the CPU, hybrid dies/processors, LRB-style 'uni' architectures; anything and everything has a case to be made. What the makers decide on will be determined by what a 'snapshot' in time looks like, architecturally speaking, at the time they really need to lock these things down. With so many significant architectures about to take flight amongst all the major semi players, that could end up being any of a number of things.
 
Going back to the Wii and current rumours.

What would be required as a bare minimum to make a 100% backwards compatible (with the Wii) Wii HD? And by bare minimum I mean adding no additional hardware features other than dealing with the increased pixel resolution and framebuffer size, whilst pushing the exact same amount of image fidelity (e.g. 1m polygons per frame with all effects at 480p, but now at 720p and 1080p).

We are talking about going from 480p (640x480) to 720p (1280x720) or to 1080p (1920x1080).

Actual pixels:

640x480 = 307200
1280x720 = 921600
1920x1080 = 2073600

3.0x increase to 720p
6.75x increase to 1080p

If Nintendo just increased clockspeeds again then:

CPU (729 MHz):

729 x 3 = 2187 MHz
729 x 6.75 = 4920.75 MHz

GPU (243 MHz):

243 x 3 = 729 MHz
243 x 6.75 = 1640.25 MHz

Just from the above it is clear that clockspeeds could not be increased to the required levels, and therefore I conclude that even a Wii HD would require a new CPU and GPU architecture.
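
For anyone who wants to check the arithmetic, a quick host-side sketch (the 729 and 243 figures are the Wii's stock CPU/GPU clocks in MHz; the assumption that clocks would have to scale linearly with pixel count is this post's working assumption, not a hardware fact):

#include <cstdio>

int main()
{
    const double sd     = 640.0  * 480.0;    // 307200 pixels
    const double hd720  = 1280.0 * 720.0;    // 921600 pixels
    const double hd1080 = 1920.0 * 1080.0;   // 2073600 pixels

    const double s720  = hd720  / sd;        // 3.0x
    const double s1080 = hd1080 / sd;        // 6.75x

    printf("720p scale:  %.2fx\n", s720);
    printf("1080p scale: %.2fx\n", s1080);

    // Naive "just raise the clocks" scaling, as in the post above.
    printf("CPU: 729 MHz -> %.0f MHz (720p) / %.2f MHz (1080p)\n", 729.0 * s720, 729.0 * s1080);
    printf("GPU: 243 MHz -> %.0f MHz (720p) / %.2f MHz (1080p)\n", 243.0 * s720, 243.0 * s1080);
    return 0;
}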

Thank you for your time. ;)
 
720p via clockspeed and memory looks doable for a Wii HD; they can just upscale to 1080p afterwards. After all, that's what the 360 does, while the PS3 just outputs 720p. I don't think you need to increase the CPU clock by the same scale as the GPU.
 

They could probably just expand the 1T-SRAM on the GPU enough to fit 720p in with some AA, like 24MB or so, add more channels/bus width to it, then take the same chip, add 12 more pixel pipelines and call it a day, if they wanted the least possible investment. I don't think they will do that though; I think they will go with something cheap and off-the-shelf, like AMD's new Fusion product, which seems like a perfect fit. It would be stupid to stick with 11-year-old tech at this point when they could get something cheap enough for a $200 console that is 20x+ faster nowadays.
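
For what it's worth, a rough sizing sketch behind that "24MB or so" figure (the 32-bit colour + 32-bit Z per sample here is my own generous assumption for illustration, not how Hollywood's eFB is actually laid out):

#include <cstdio>

int main()
{
    const double width  = 1280.0;
    const double height = 720.0;
    const double bytes_per_sample = 4.0 /* colour */ + 4.0 /* Z */;

    // Embedded framebuffer needed at 720p with no AA, 2x and 4x multisampling.
    for (int samples = 1; samples <= 4; samples *= 2) {
        double mb = width * height * bytes_per_sample * samples / (1024.0 * 1024.0);
        printf("720p, %dx AA: %.1f MB of embedded framebuffer\n", samples, mb);
    }
    return 0;
}

That lands at roughly 7 MB with no AA, 14 MB at 2x and 28 MB at 4x, which is the neighbourhood the 24MB estimate is aiming at.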
 
Would it be more expensive, in terms of research and development, to add more pixel pipelines to the Wii GPU than to use, for example, an embedded 4690 (or NVIDIA equivalent) or something like the Llano APU?

I think it is quite likely Nintendo will stick with IBM for the CPU - cuts developer costs.

A general question: what would happen if you took the original Flipper and just kept adding more cores to it on current process nodes? Is it really very inefficient, impractical or hard to engineer with the proper scaling?
 
I am sure that if the Nvidia-Nintendo deal is true, we can expect the next console from Nintendo to use the ION platform. Remember that Nintendo are fans of a memory controller + GPU single-chip solution.

At the end of this year we should have news about the third-generation ION, and it should have the same performance as a GF8600, more than enough for direct HD ports from PS3 and 360, adapted to the motion controller, at a very low cost.

Nintendo always finishes their consoles about a year before the system's release; if Nintendo is able to use ION4, it will be a very good job of timing from Nvidia.
 
ION2 is 32 shaders, if rumours from summer 2009 are to be believed, built on 40nm technology from everyone's favourite fab plant.
If it is similar to GT218, then it is a 22W part with 32 CUDA shaders and a 64-bit bus with support for DDR3.

Definitely a viable alternative for Nintendo if they wish to stick with their current form factor, but I do not think emulation is an area Nintendo would want to head into - Sony gave up on it, probably due to the resources required to keep it viable. It seems backwards compatibility is a problem for several reasons: the technical hurdles, the cost (hardware and ongoing software support), and the fact that it may discourage owners from buying new games for their shiny new consoles.
 

I think Nintendo actually wants to drop BC. Console sales have been stellar with the Wii, but software seems mediocre outside of a few big titles. What more than Wii Sports does a casual gamer need?
 

ION2 is not a chipset at all; it's GT218, which means 16 shader units and a 64-bit DDR2 bus (or at least, as far as I know, there aren't any GDDR3 variants of GT218 out there?).
 