Predict: The Next Generation Console Tech

In regards to GPU bus width, what you have to remember is that back when the PS3/360 were released, 256-bit buses were still pretty much exclusive to high-end cards and 128-bit was deemed 'mid-range'.

5-6 years later that's no longer the case. Nearly all the graphics cards in the modern 'mid-range' price bracket now have 256-bit buses, except for the odd few cards released on borked dies to fill a set price gap (the GTX 460 768MB comes to mind).

I would really be surprised if the consoles didn't at least have GPUs with 256-bit buses.
 
I know. If it started production in December 2011 and is a custom chip, you can pretty much guarantee that Microsoft had this chip designed alongside the desktop version and manufactured at the same time.

Question is, what did they do to make it 'custom'?

What could they have removed and stripped out that wasn't needed for a console GPU (Eyefinity logic?), and what could they have added?

Well, if it's indeed an SoC, it's of course a "custom design", since it's being integrated with PPC CPU cores. The memory controllers would be different too, and it would most likely house the rest of the southbridge features as well.
 
Well, just because the definition of mid-range/high-end has changed doesn't change the reality of how much the chips will cost (by way of the minimum die perimeter dictated by the bus width). A 200 mm^2 chip is still a 200 mm^2 chip.
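As a very rough sketch of that pad-limit point: the memory PHYs have to sit along the die edge, so bus width puts a floor on the perimeter, and the perimeter puts a floor on the area. Every constant below is an assumed placeholder (not any real chip's figure); only the scaling is the point:

```python
# Back-of-the-envelope sketch of "bus width dictates minimum die perimeter".
# All constants are illustrative assumptions, not specs of any real chip;
# the takeaway is only that I/O edge length scales with bus width, and die
# area scales with the square of the side length, so a wide bus sets a cost floor.

EDGE_MM_PER_32BIT_GDDR5_CHANNEL = 5.0   # assumed die-edge length one 32-bit PHY occupies
MAX_PERIMETER_FRACTION_FOR_PHY = 0.75   # assumed share of the perimeter usable for memory I/O

def min_die_area_mm2(bus_width_bits: int) -> float:
    """Smallest square die whose edges can fit the memory PHYs, under the assumptions above."""
    channels = bus_width_bits // 32
    phy_edge_mm = channels * EDGE_MM_PER_32BIT_GDDR5_CHANNEL
    min_perimeter_mm = phy_edge_mm / MAX_PERIMETER_FRACTION_FOR_PHY
    side_mm = min_perimeter_mm / 4.0
    return side_mm * side_mm

for width in (128, 192, 256):
    print(f"{width}-bit bus -> pad-limited floor of roughly {min_die_area_mm2(width):.0f} mm^2")
```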
 
A while back I asked whether large SoCs (~400 mm^2) were possible for the next Xbox. The sentiment then was that the heat generated would be impossible to handle. Given the recent talk of 500-600 mm^2 SoCs in the newest SemiAccurate rumors, does that sentiment still hold up? Posts a few pages back seemed quick to accept such possibilities. Quite a change from a few months ago.

Does this mean that an eventual leak of a 200W TDP figure for the GPU alone would be quickly accepted without much questioning? :p
 
If they're limited to a 128-bit GPU bus, can't they have split memory pools and add another 2GB of RAM connected to the CPU? Or is that precluded by it being an SoC?
 

Sure, they could do two 128-bit buses: one for the CPU, one for the GPU.

... or they could just do one 256-bit bus and save the developers a few headaches by having a UMA. ;)

The costs would be roughly the same, but with a bit of savings in the UMA design from not requiring two memory controllers.
 
Worst case, yes. But the buffers should be highly compressible. Most pixels will only have data from one fragment.

Cheers

Are you confident current hardware doesn't allocate/reserve that amount of memory for the back buffer? (TBDR excluded)
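For a sense of the amount of memory in question, a quick worst-case calculation; the resolution, sample count and per-sample formats below are just assumed for illustration:

```python
# Worst-case (uncompressed) storage for a multisampled back buffer.
# Resolution, sample count and per-sample formats are illustrative assumptions.

WIDTH, HEIGHT = 1920, 1080
SAMPLES = 4                 # 4x MSAA
BYTES_COLOUR = 4            # assumed RGBA8 colour per sample
BYTES_DEPTH = 4             # assumed 24-bit depth + 8-bit stencil per sample

pixels = WIDTH * HEIGHT
bytes_total = pixels * SAMPLES * (BYTES_COLOUR + BYTES_DEPTH)
print(f"{bytes_total / 2**20:.1f} MiB if every sample is stored uncompressed")
# Roughly 63 MiB. Whether the full footprint still has to be reserved, or
# whether compression lets the hardware get away with less than this worst
# case, is exactly the question being asked above.
```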
 

And the single 256-bit UMA doubles the peak bandwidth the GPU can see at the same time ;)
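Back-of-the-envelope, assuming some GDDR5 data rate (the 5 Gbps per pin below is just an assumed figure), it's the difference between the GPU seeing half the pins or all of them:

```python
# Peak bandwidth seen by the GPU under the two layouts, at an assumed data rate.

GDDR5_GBPS_PER_PIN = 5.0     # assumption for illustration

def peak_gb_per_s(bus_width_bits: int, gbps_per_pin: float = GDDR5_GBPS_PER_PIN) -> float:
    return bus_width_bits * gbps_per_pin / 8.0   # gigabits/s -> gigabytes/s

split_gpu = peak_gb_per_s(128)   # split pools: GPU only sees its own 128-bit bus
uma_gpu   = peak_gb_per_s(256)   # UMA: GPU can burst on the full 256-bit bus

print(f"Split pools: GPU peak {split_gpu:.0f} GB/s")
print(f"UMA:         GPU peak {uma_gpu:.0f} GB/s (shared with the CPU, of course)")
```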
 
You're super sampling.

Cheers

Storage-wise, MSAA is no different from SSAA. That is, MSAA stores colour and Z samples, then determines triangle coverage. The difference is that SSAA multiplies the texture sampling per pixel whereas MSAA doesn't. The point of MSAA is that it's a special case of supersampling: it uses coverage/depth to effectively work only on polygon edges.

The texture sampling is not supersampled, so in-polygon pixels aren't affected. Since you've got per-pixel shaders potentially using texture ops, you're not supersampling those either. That's why brute-force SSAA can be so expensive for performance: you've got N texture samples per pixel, and thus N x the shader texture ops as well as increased texture bandwidth demands. You've got AF for more efficient sampling of the textures anyway, although that clearly has little benefit for the shading. Increased texture sampling requires more bandwidth, not more storage in memory, since you're just sampling textures that already exist there.
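A small sketch of that storage-vs-shading distinction; the resolution and per-sample formats are assumed purely for illustration:

```python
# MSAA and SSAA store the same number of colour/Z samples, but only SSAA
# multiplies the shader/texture work. Formats and resolution are assumptions.

WIDTH, HEIGHT, N = 1280, 720, 4
BYTES_PER_SAMPLE = 4 + 4            # assumed RGBA8 colour + D24S8 per sample

pixels = WIDTH * HEIGHT
storage_bytes = pixels * N * BYTES_PER_SAMPLE   # identical for 4x MSAA and 4x SSAA

msaa_shader_invocations = pixels * 1            # one shade per covered pixel (edge pixels aside)
ssaa_shader_invocations = pixels * N            # every sample is shaded and textured

print(f"Storage, either way: {storage_bytes / 2**20:.1f} MiB")
print(f"Shader/texture work: MSAA ~{msaa_shader_invocations:,} vs SSAA {ssaa_shader_invocations:,}")
```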
 
Going back to this article:

http://semiaccurate.com/2011/10/27/amd-far-future-prototype-gpu-pictured/

What's the difficulty / degree of possibility of also squeezing in a CPU and up-gunning the RAM to 2-4 GB of GDDR5 (cooling such massive amounts of stacked memory would be an issue)? The eDRAM is probably not necessary because of the high bandwidth; it's like having several gigs of eDRAM already, unless I am mistaken.

Such an SoC (MCM?) would probably be huge.
 

The 360 Slim is practically already there. The ROPs/Z units aren't a crazy amount of space compared to the eDRAM. Move those functional units back into the GPU, get rid of the GDDR3 and eDRAM I/O, put in a 128-bit GDDR5 bus, and voila: an SoC under 180 mm^2 (guesstimate). Attach 8x2Gbit GDDR5.

What sort of specs are you talking about, though? Anything is possible at zombo.com. :p

A 256-bit GDDR5 bus will need a bigger chip design, of course, and 16x2Gbit chips for 4GB of RAM. I don't think 4Gbit is on the horizon in the near future, so...
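The chip-count arithmetic, assuming each 2Gbit device contributes 16 data bits to the bus (e.g. x32 parts run in clamshell/x16 mode), which is what those counts imply:

```python
# Capacity / bus-width arithmetic behind the chip counts above.
# Assumption: each 2 Gbit GDDR5 device presents a 16-bit interface to the bus.

def gddr5_config(chip_count: int, gbit_per_chip: int = 2, bits_per_chip: int = 16):
    capacity_gb = chip_count * gbit_per_chip / 8       # gigabits -> gigabytes
    bus_bits = chip_count * bits_per_chip
    return capacity_gb, bus_bits

for chips in (8, 16):
    cap, bus = gddr5_config(chips)
    print(f"{chips} x 2Gbit -> {cap:.0f} GB on a {bus}-bit bus")
# 8  x 2Gbit -> 2 GB on a 128-bit bus
# 16 x 2Gbit -> 4 GB on a 256-bit bus
```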
 

Multicore PPC with SMT
2 GB GDDR5
100 W AMD GPU
 
Can Oban simply be a new 360 revision semi-accurately considered a new product?

That was my initial thought, and well... SemiAccurate even said as much the first time they reported it back in August. :rolleyes: Then he did a switcheroo a few months later by saying it was an Xbox Next chip. Maybe he just misunderstood "next Xbox chip" as "Xbox Next chip". See the difference... :rolleyes:

Anyways, an SoC on day 1 doesn't really make sense for a high-powered next-gen console.
 
Put a 360SS chip into some smart TVs; offer TV makers a cut of Live sales & subscription money from those units. Maybe build Kinect in too.

Hell, it'd be a better way to differentiate than making yet another device with YouTube support that'll stop getting new widget apps after 18 uninteresting months.
 