Predict: The Next Generation Console Tech

The 7870 is dead on 6970 performance. They won't need 6990s in the dev kits because the dev kits won't have the Windows API crap stopping them.
If they have 6990s in the dev kits, would that mean they're going for 7950/7970 level performance then? :p
Purposely setting myself up for massive disappointment when the real specs are released.
 
Right, a 200W GPU in a console. It could be liquid cooled, and that would be fun, but a tad expensive.

My assumption is that you don't need a 200W GPU in a console to achieve the same results as a 200W GPU in a PC.

Wasn't Xenos competitive with R600 cards that had a much higher TDP?
 
If they already have a design for a node, you don't just shift it to the new node and call it a day.

Is it so shocking that IBM announced 45nm for their Wii U design back in June? They may have 32nm, but it's not exactly prime time for anyone but Intel.
I'd assume that if MS/Sony are planning on 2012, they have a design for 32nm, which is a full 8x increase in density from the 90nm launch of the xb360/ps3. As you said, a half node from there shouldn't be too difficult, so going from a 32nm design to 28nm fab should be doable.
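
For anyone sanity-checking the 8x figure, a quick sketch (my own, assuming ideal scaling of area with the square of the feature size):

```python
# Rough density check (my own sketch; assumes ideal area scaling
# with the square of the feature size).
full_node = (90 / 32) ** 2   # ~7.9x -> the "full 8x" from 90nm to 32nm
half_node = (32 / 28) ** 2   # ~1.3x more going from 32nm down to 28nm
print(round(full_node, 1), round(half_node, 1))  # 7.9 1.3
```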

As for binning etc., perhaps this is where the dual-GPU rumors are coming from. Two smaller GPUs with a smaller CPU produce better yields than fewer larger chips.
With the GPU expected to offload some work from the CPU anyway, it might make sense to have it compartmentalized into two functioning units, perhaps with their own cache/memory pools.


As for Nintendo and why they are going with 45nm ... they are using ancient technology in their CPU/GPU, so it doesn't surprise me that they would be more comfortable using an older process lithography. They don't have the issues of large, hot, expensive chips, so the desire/need for a smaller process for higher performance and smaller/cheaper chips isn't there.
 
You really think they will be off the shelf parts? I don't think either of them would be sharing their customized part with their competitor. They might be more similar than last gen, but they won't be the same.
 
I'd assume that if MS/Sony are planning on 2012, they have a design for 32nm, which is a full 8x increase in density from the 90nm launch of the xb360/ps3. As you said, a half node from there shouldn't be too difficult, so going from a 32nm design to 28nm fab should be doable.

I don't really see the point in a 32nm node for either of them. It buys you nothing but the expense of a node shrink and may be problematic to boot. There's a reason AMD and Nvidia both skipped it. With how 20nm is shaping up, 28nm may be the only node for either of them this next gen.

As to 28nm being too immature a process: maybe Sony/MS weren't leading the way with 65nm, but they were definitely in the first adopters' club. And Sony was in production at 45nm a mere 6 months after its release (MS would've been there too had it not been for their decision to go to the XCGPU, which was a much bigger risk than an immature node). It seems fairly apparent that neither would be deterred by a new-ish process, especially if the one doing the design (AMD/IBM) is already producing at that node.

As for binning etc., perhaps this is where the dual-GPU rumors are coming from. Two smaller GPUs with a smaller CPU produce better yields than fewer larger chips.

With the GPU expected to offload some work from the CPU anyway, it might make sense to have it compartmentalized into two functioning units, perhaps with their own cache/memory pools.
I would add that a split GPU would also make it easier if they were to go to a 3D stack as part of a "slim" model later on, especially if there's no node shrink in the plans. It seems GCN would be a good candidate for this with its modular CUs. One question I do have, though: with GCN, to deactivate a CU (for binning/yield), do you need to deactivate the whole quad? It seems like a huge waste if you do.
 
You really think they will be off the shelf parts? I don't think either of them would be sharing their customized part with their competitor. They might be more similar than last gen, but they won't be the same.

They won't share, and both will have their own personality, but when you go down the checklist it's going to be pretty damn close/indistinguishable.

Now one difference that could be possible is the GPU/memory/interposer setup we've seen recently. If one went with that while the other didn't, even if the GPU was the exact same chip, you could see a huge difference in real performance.
 
Assuming a 32/28nm launch in 2012 yields an 8x transistor count, this would amount to a budget of roughly 4 billion (497m x 8 = 3,976m) if we assume an equal budget per process node (rough tally sketched below).

This leads to some pretty interesting potential hardware:

With that budget, MS could extend the xb360 architecture to the following:

10MB EDRAM (100m) => 60MB EDRAM (600m), enough for a full 1080p framebuffer with 4xAA

3-core xcpu (165m) => 9-core xcpu (495m), or an upgraded 6-core PPE with OoOE and larger cache, along with an ARM core (13m trans)

This leaves a hefty 2.8b transistors available for the xgpu, which could accommodate 3x AMD ~6770 (1040m, ~3 teraflops) or 4x AMD ~6670 (716m, ~2.8 teraflops).

Such small, modular chips would enable good yields on new(er) processes until they were mature enough to combine together and eventually integrate into an APU.
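
Tallying that up (my own sketch, just re-running the arithmetic with the per-block transistor counts quoted above):

```python
# Rough tally of the transistor-budget speculation above
# (my own sketch; all counts in millions of transistors).
xb360_total = 497              # Xenon + Xenos + EDRAM at the 90nm launch
budget = xb360_total * 8       # 3,976m, i.e. roughly 4 billion at 32/28nm

edram = 600                    # 60MB EDRAM (6x the original 100m)
cpu = 495                      # 9-core xcpu (3x the original 165m)
gpu = budget - edram - cpu     # what's left over for the xgpu

print(budget, gpu)             # 3976 2881 -> the "hefty 2.8b" quoted above
```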
 
It seems very unlikely that there will be multiple GPUs in a gaming console.

3-core xcpu (165m) => 9-core xcpu (495m), or an upgraded 6-core PPE with OoOE and larger cache, along with an ARM core (13m trans)

What is the ARM core for?
 
fat G-buffer DS
1920*1080*4xMSAA*4buffers*8bytes per pixel (FP16) is already 253MB.
1920*1080*4xMSAA*4bytes per pixel (32bpp Z) is 31MB.

LPP
Normals (FP16) + depth (32bpp), 4xMSAA -> 95MB

Divide by 2.25 for 720p
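
For anyone wanting to re-run those numbers, a quick sketch (my own; it just multiplies resolution x samples x buffers x bytes, using the per-pixel sizes assumed above):

```python
# Back-of-the-envelope check of the buffer sizes quoted above (my own sketch;
# FP16 RGBA = 8 bytes per sample, 32bpp depth = 4 bytes, 4xMSAA stores every sample).
MB = 1024 * 1024

def target_mb(width, height, msaa, bytes_per_sample, buffers=1):
    """Size of a multisampled render target in MB."""
    return width * height * msaa * buffers * bytes_per_sample / MB

# Fat deferred-shading G-buffer at 1080p with 4xMSAA
print(target_mb(1920, 1080, 4, 8, buffers=4))                     # ~253 MB of FP16 targets
print(target_mb(1920, 1080, 4, 4))                                # ~31.6 MB of 32bpp Z

# Light pre-pass: FP16 normals + 32bpp depth, 4xMSAA
print(target_mb(1920, 1080, 4, 8) + target_mb(1920, 1080, 4, 4))  # ~95 MB

# 1080p has 2.25x the pixels of 720p, hence "divide by 2.25"
print((1920 * 1080) / (1280 * 720))                               # 2.25
```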
 
fat G-buffer DS
1920*1080*4xMSAA*4buffers*8bytes per pixel (FP16) is already 253MB.
1920*1080*4xMSAA*4bytes per pixel (32bpp Z) is 31MB.

LPP
Normals (FP16) + depth (32bpp), 4xMSAA -> 95MB

Divide by 2.25 for 720p

Fairly impossible indeed 0_0
Has modern and emerging rendering tech made any progress on the TBR front?
 
Sony's next GPU will be Nvidia. Microsoft will be AMD. At least until proven otherwise.

It was reported MONTHS ago that AMD is doing the GPU, not Nvidia.

Just one of the sites covering it: http://www.tomsguide.com/us/Cell-GPU-Bulldozer-Wii-U-PlayStation,news-11809.html

AMD are a lot better than Nvidia when it comes to performance per watt, which is why it makes sense that AMD are doing the console GPUs.
 