AMD/ATI for Xbox Next?

http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf
The last slide clearly says that he is talking about memory bandwidth.
Anyway, RV870 has 1 TB/s of bandwidth for texture fetch and 435 GB/s from L1 to L2 cache.
Cache is not memory? He speaks about "effective bandwidth", not external bandwidth. I don't think anybody at Epic expects GPU external bandwidth to grow anywhere close to 4TB/s; that's between 20 and 40 times more than what is possible now. Even for internal bandwidth it will be a tough figure to reach; even with a better architecture it would imply a substantial bump in clock speed.
I found the 1TB/s figure in B3D's own review too:
B3D said:
This L1 is fully associative, and we're told it uses 256 bit cachelines, maintaining a cacheline size tradition that was started with the R300. You'll probably see a cute 1 TB/s fetch rate being quoted for the L1, but keep in mind that's aggregate, a sampler can only fetch from the L1 at a rate of ~54.4GB/s, and aggregate doesn't make much sense here since one sampler can't exactly fetch from another's L1.
54GB/s is a lot if you consider the clock speed; Intel's latest architecture seems to provide 50GB/s read and write, and it seems it can peak at ~100GB/s (from here). I don't know if it can read and write at the same time (it's a bit over my head), and it's likely a best-case scenario. That's for a 3.2GHz part.
For RV8xx it's read only.
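To sanity-check those figures, a quick back-of-envelope in Python; the 20 L1s, 850MHz clock and 64 bytes/clock fetch width are my assumptions for RV870, so treat it as a sketch:

```python
# Back-of-envelope: RV870 per-sampler vs aggregate L1 fetch bandwidth.
# Assumptions (mine): 20 texture units each with an L1, 850MHz core clock,
# 64 bytes fetched per L1 per clock.
clock_hz = 850e6
bytes_per_clock = 64
num_l1 = 20

per_sampler_gbs = clock_hz * bytes_per_clock / 1e9   # ~54.4 GB/s, matches B3D
aggregate_tbs = per_sampler_gbs * num_l1 / 1000      # ~1.09 TB/s "cute" figure

print(f"per sampler: {per_sampler_gbs:.1f} GB/s")
print(f"aggregate:   {aggregate_tbs:.2f} TB/s")
```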
But more educated members could chime in on this ;)
 
Ultimately with next gen, if we are to make educated guesstimates, we would have to lay out what we think their market approach is, e.g. a focus on BC or on 3D makes a big impact on the system design.

If MS were to focus on 3D Displays it would seem that this would open up an opportunity to leverage the yield advantage of 2 smaller GPUs instead of 1 larger GPU. As someone who wears glasses I am not so hip on 3D displays at this point, but if MS were to pitch a new console with 3D gaming (3D output, 3D Motion Controller+Camera) they very well could be targeting the mythical "casual" market.

Of course they may not go the 3D route, as the market may not be ready in 2012. Or consumers might get the impression that you *need* a 3D TV, which would cut off your userbase. So if they treat 3D as a secondary feature they could go a different direction.
They should pass on 3D until TVs are able to deliver it without relying on glasses. It will have a significant impact on price (Nvidia's glasses are not cheap), and Natal 2 will already come with its own cost.
Performance-wise it could actually be OK: if a game renders at 1080p it could drop to 720p (~half the pixels per view) in 3D mode (the same could apply even if the native resolution is not 1080p).
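For what it's worth, the raw pixel counts support that trade; a trivial sketch with the standard resolutions:

```python
# One 1080p frame vs two 720p views (one per eye) for stereo 3D.
mono_1080p = 1920 * 1080      # 2,073,600 pixels
per_eye_720p = 1280 * 720     #   921,600 pixels

print(per_eye_720p / mono_1080p)       # ~0.44: 720p is roughly half of 1080p
print(2 * per_eye_720p / mono_1080p)   # ~0.89: stereo 720p still costs less
                                       # than a single 1080p frame
```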
My belief is that this kind of feat should stay optional for now.
 
Even if 3D is available on next-gen consoles, which it looks like it might be, it's going to be such a niche product that I can't see it being that important. How many homes have HDTVs now? Maybe 40%, if that; most of my mates still play their 360/PS3 on SD TVs, so I just don't see 3D becoming mainstream for maybe 5-7 years, if it takes off at all.
The next big rush will be people currently on SD TVs getting HDTVs, cheapo or not. Maybe towards the middle of next gen most players will be watching in HD; until then, 3D will be a tiny, tiny percentage and I can't see the big players betting the farm on the extra processing power needed. Better to put that grunt to use making games with far more detailed graphics, more shader work, effects etc. Make it look prettier rather than make it 3D would be my advice.

I mean, for all Sony's banter at the start of this gen about True HD, how many native 1080p games are there? Wipeout and......................
 
With the 360 MS moved away from the "PC in a nice small box" model, I cannot see them going back to that with a full AMD/ATI setup for the next one.
 
With the 360 MS moved away from the "PC in a nice small box" model, I cannot see them going back to that with a full AMD/ATI setup for the next one.

Why not? It totally depends on why they didn't go for a PC-like setup with the X360. I don't think that had anything to do with not wanting a PC-like setup, but more with AMD at the time probably not being capable of delivering on the CPU side, and with not wanting to go with Intel again.
 
With the 360 MS moved away from the "PC in a nice small box" model, I cannot see them going back to that with a full AMD/ATI setup for the next one.

Curious how the 360 is less of a "PC in a box" than the Xbox...

The 360 doesn't have x86, but that is really neither here nor there. There are advantages/disadvantages, but the reason to not go with x86 wasn't purely to dump it (see: Bill Gates wanting to stick with Intel). IBM offered the specs MS wanted at the price they were willing to pay. Were AMD or Intel able to offer a 3-core / 6-thread CPU with high peak flops and 1MB of cache under 165mm2? Could MS then take the design to other fabs, control its process shrinks, or reuse the design in future consoles if AMD/Intel had offered the chip design?

If your comment was in regards to "custom" non-desktop parts, then there is no reason AMD cannot offer such. Bulldozer is specifically designed for flexible SKU production, and ATI has already shown itself capable of creating additional GPU SKUs. Then again, there could be an advantage to AMD developing a GPU chip that would find itself in PCs.

I don't really understand your point, and the factors that drove MS to ATI and IBM last time don't preclude an all AMD machine.

The big things keeping MS away from an AMD CPU would be cost, control of the IP to take it to competing fabs, peak flops (marketing; long-term potential), performance (heat, power, performance per mm2), etc. Piracy and BC would also be concerns. But there is no overriding desire not to be a PC in a box, outside of the fact that the philosophy with the original Xbox was literally PC parts tossed in a box. That has less to do with the vendors and more to do with what MS put in.
 
The big things keeping MS away from an AMD CPU would be cost, control of the IP to take it to competing fabs, peak flops (marketing; long-term potential), performance (heat, power, performance per mm2), etc.

So is that a guaranteed set of indicators that they won't be providing the CPU? ;)

With the trend in GPUs as of late, I wouldn't be surprised if they quoted those FLOPs instead of the CPU. :p
 
MS has done everything possible to try and make the Xbox the center of your entertainment center - from Netflix to Twitter and Facebook - they've been moving to keep you in front of your 360 more and more. I wouldn't be surprised to see a shift to more physical memory to help support things like a full internet browsing experience (even if modified to disallow downloads and installation of other apps), with compatibility for services like YouTube. That being said, I think that adding much more eDRAM is only going to add way too much die size, and complex cooling solutions would only increase cost. Looking at another "hot box" would scare the hell out of me after the RROD debacle that is the 360 - whether RRODs are directly related to heat dissipation is still up for debate (I believe there are other problems as well), but heat can't help the situation.

I'd really like to see a shift back to a less complex GPU/CPU setup which might even be (gasp) passively cooled - fewer moving parts in these boxes can't be a bad thing. With architectures starting to become final, I'm hoping we can be more efficient with less hardware to this end. At the very least, it'll be easier on my ears when I'm trying to play a game :p

God - too bad they're too expensive, I'd love to see some solid state drives in these new boxes - would like to see load times and install times almost totally eliminated.

Jack
 
I'd really like to see a shift back to a less complex GPU/CPU setup which might even be (gasp) passively cooled - fewer moving parts in these boxes can't be a bad thing. With architectures starting to become final, I'm hoping we can be more efficient with less hardware to this end. At the very least, it'll be easier on my ears when I'm trying to play a game :p

So what you are really saying is you want a WiiHD! :smile:

too bad they're too expensive, I'd love to see some solid state drives in these new boxes - would like to see load times and install times almost totally eliminated

Install times would be just as bad. Actually they would be worse, assuming next-gen games increase in size/complexity. Digital and optical transfer speeds are going to be the bottleneck for install times.
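A rough sketch of the arithmetic (the transfer rates below are my ballpark assumptions, not official drive specs):

```python
# Minutes to move a game-sized install at assumed peak transfer rates.
# The rates are my ballpark figures, not official specs.
rates_mb_s = {
    "12x DVD (360-class)": 15.8,
    "2x Blu-ray (PS3-class)": 9.0,
    "HDD/SSD write (hypothetical)": 200.0,
}
install_mb = 7 * 1024  # assume a ~7GB install

for source, rate in rates_mb_s.items():
    print(f"{source}: {install_mb / rate / 60:.1f} min")
# A fast SSD only speeds up the write side; the read still comes off the
# disc (or the network), so an SSD barely dents install time.
```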

As for load times, you note the cost. If they cannot afford more eDRAM, there is no way to spend a huge % of the budget just to diminish load times, when indeed this is as much a designer issue as a hardware one. Games can be loaded while still allowing the gamer to 'play', and with smart asset re-utilization as well as clever streaming, load times can be dampened. Even a smart checkpoint system that keeps checkpoints within the current buffer can help here.

To tame load times I would vote for more memory (e.g. 4GB instead of 2GB) and tell developers: "Look, (a) do something about initial load times with your game design, and (b) use the extra memory for aggressive streaming. Use more clever tricks like load-time procedural generation, etc." Better yet, provide the tools. There are some amazing shader-based texture and asset generation tools out there. You still need system memory, but I would rather see an important, flexible resource like system memory get a bump instead of a more limited resource like an SSD.
 
So what you are really saying is you want a WiiHD! :smile:

Well...No... :)

But I would like to see a shift away from some of the crazy active cooling methods that I see us moving towards. What I don't want to see is us getting to the point that consoles are using some weird form of liquid cooling or a phase setup :p - that's just what I need, an environment control box around my console.

:p

Install times would be just as bad. Actually they would be worse, assuming next-gen games increase in size/complexity. Digital and optical transfer speeds are going to be the bottleneck for install times.

You know, I didn't think about it that way - so on a completely different topic - "how long do you think it'll be before we go 100% digital distribution?" We seem to be moving in that direction - I just doubt it'll be the next generation of consoles where the shift occurs.

As for load times, you note the cost. If they cannot afford more eDRAM, there is no way to spend a huge % of the budget just to diminish load times, when indeed this is as much a designer issue as a hardware one. Games can be loaded while still allowing the gamer to 'play', and with smart asset re-utilization as well as clever streaming, load times can be dampened. Even a smart checkpoint system that keeps checkpoints within the current buffer can help here.

To tame load times I would vote for more memory (e.g. 4GB instead of 2GB) and tell developers: "Look, (a) do something about initial load times with your game design, and (b) use the extra memory for aggressive streaming. Use more clever tricks like load-time procedural generation, etc." Better yet, provide the tools. There are some amazing shader-based texture and asset generation tools out there. You still need system memory, but I would rather see an important, flexible resource like system memory get a bump instead of a more limited resource like an SSD.

Yeah - I get that good design really hides load times - this has been done successfully since the PS2 (thinking God of War) - I'd just like to get to the point where it's a non-issue. I think more memory is the answer for games and, as I stated earlier, will help with web browsing and extra-game media content.

Jack
 
Good, so we are all on board for 8GB of system memory :devilish:

Edit: Seriously, I get the feeling we will be "lucky" to get 2GB instead of 4GB. So I say we set the fence at 8GB and "compromise" to 4GB. As much as I would like to see the new consoles be around a long time, I get the feeling budgets, process uncertainty and consumer fickleness will dictate more "reasonable" approaches which result in large compromises. Getting a platform that is a big leap over the current gen, accessible for great launch titles, and with a ton of potential under the hood to wow consumers years after release, without crimping developer imaginations, is a hefty task.
 
If the extra 256MB cost MS ~$1 billion this gen, how much would 4-8GB cost in the future?

I really hope MS and Sony don't abandon leading-edge tech and go the Wii route, but something tells me that the bean counters will have even more say this next gen.
 
If the extra 256MB cost MS ~$1 billion this gen, how much would 4-8GB cost in the future?

It will mainly come down to the number of chips, not dissimilar from other semiconductor fabrication costs. The roadmap for, say, GDDR5 currently specs out to 6-7 Gbps per pin (96-112GB/s on 128-bit), but the density is so far only 1Gbit per chip. A 2Gbit density is in the works, but that might be a while... to use the latest fab process, that is, but it should be here by the time the next generation arrives. That basically means 8 DRAM chips for 2GB of system memory. Make of that what you will. ;)
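The arithmetic behind those figures, as a quick sketch:

```python
# GDDR5 bandwidth and capacity from the figures above.
bus_bits = 128
for gbps_per_pin in (6, 7):
    gb_s = bus_bits * gbps_per_pin / 8
    print(f"{gbps_per_pin} Gbps/pin on {bus_bits}-bit: {gb_s:.0f} GB/s")  # 96 / 112

# GDDR5 chips have a 32-bit interface, so a 128-bit bus takes 4 chips
# (8 in clamshell mode). 8 chips x 2Gbit each = 16Gbit = 2GB total.
chips, density_gbit = 8, 2
print(f"{chips} x {density_gbit}Gbit = {chips * density_gbit / 8:.0f} GB")
```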


edit: bandwidth could double up again with differential signalling... double the wires though... some pad-limit and cost implications there (motherboard complexity)
 
Looking at current GPUs, I don't think that BW is going to cut it for a technologically progressive GPU unless something fundamental changes. Are we really considering shipping a console in 2012, with most sales occurring in 2013-2016, with about 100GB/s of bandwidth? The PS3 has a collective 50GB/s.

If the next console is GPU-centric, a 350mm2 GPU, and the following shrink, would appear to be workable with a 256-bit bus. The memory bandwidth needs to come from somewhere, because we are looking at many-core CPUs needing BW as well. I guess that is an argument for eDRAM... or XDR2.

I don't see a console of relative market performance to what the Xbox and Xbox 360 were at time of launch being very impressive with 100GB/s of bandwidth for nearly a dozen CPU "cores" and a 300mm2+ GPU, aiming for 2xMSAA at 1080p at 16-bit.

I guess there is local store and eDRAM for a reason...
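A rough framebuffer-traffic sketch shows the problem (assumptions are mine: 32-bit colour + 32-bit Z per sample, read-modify-write for blending/Z, ~3x overdraw):

```python
# Rough ROP traffic for 1080p at 60fps, i.e. the traffic eDRAM keeps on-die.
# Assumptions (mine): 4B colour + 4B Z per sample, read+write for
# blending/Z-test, ~3x average overdraw.
width, height, fps = 1920, 1080, 60
bytes_per_sample = 4 + 4
rw_factor, overdraw = 2, 3

for msaa in (2, 4):
    traffic = width * height * msaa * bytes_per_sample * rw_factor * overdraw * fps
    print(f"{msaa}xMSAA: {traffic / 1e9:.0f} GB/s")
# ~12 GB/s at 2x and ~24 GB/s at 4x; the 4x case alone exceeds the 360's
# entire 22.4GB/s external bus, before a single texel is fetched.
```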
 
I really hope MS and Sony don't abandon leading-edge tech and go the Wii route, but something tells me that the bean counters will have even more say this next gen.
I agree with this. If AMD helps Microsoft put Xenos and Xenon into the same die (supposedly a goal Microsoft already has in mind), do you think they would be able to do the same with their next-gen machine? With that said, wouldn't that give the chips a fat pipe between each other without having to worry about the problems current die shrinks face, or am I mistaken?
 
Looking at current GPUs, I don't think that BW is going to cut it for a technologically progressive GPU unless something fundamental changes. Are we really considering shipping a console in 2012, with most sales occurring in 2013-2016, with about 100GB/s of bandwidth? The PS3 has a collective 50GB/s.

Well that's the thing about Xenos' eDRAM. There's a fundamental physical limit to balance there with die size, future shrinks, pad limits, # of chips, wire tracing, motherboard complexity etc.

The bandwidth issue just keeps getting worse because they don't scale on the same level. Now in a lot of situations, it probably won't matter, but then you should have a GPU that isn't wasting transistors with a zillion ROPs with 8x multisamples per cycle or something ridiculous. :p

Then again, with MRTs and deferred rendering techniques, and FP16/32 becoming popular, the bandwidth issue compounds... There are shortcuts with 32bpp HDR formats, though...
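To see how it compounds, a small sketch of G-buffer footprints; the 4-MRT layout here is hypothetical, not any particular engine's:

```python
# G-buffer footprint at 720p for a hypothetical 4-MRT deferred setup.
pixels = 1280 * 720

for name, bytes_per_pixel in (("4 x RGBA8 (32bpp targets)", 4 * 4),
                              ("4 x FP16 RGBA (64bpp targets)", 4 * 8)):
    mb = pixels * bytes_per_pixel / (1024 * 1024)
    print(f"{name}: {mb:.1f} MB per fill")   # ~14.1 MB vs ~28.1 MB
# Each deferred light then reads much of that back, so going from 32bpp to
# FP16 targets roughly doubles both the fill and the re-read cost; hence
# the appeal of packed 32bpp HDR formats.
```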

But... ALUs/math don't take much if any bandwidth... shader AA? edge detect? etc... Could be a decent option for some developers, some of whom already do that on Xenos... This hardware aspect will be more important in the end...

After a generation of getting to grips with tiling... it'd be interesting to see how much of an issue there will be with geometry reprocessing especially with a more robust tessellation pipeline, assuming it actually gains traction.

If not, well... triangle: pixel ratios will be better at 1080p than 720p... A few devs are already dealing with 3+ tiles for 4xMSAA, and there's dynamic AA too...
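The tile math behind that "3+ tiles" figure, under the usual assumptions (10MB eDRAM, 4 bytes colour + 4 bytes Z per sample):

```python
import math

# Predicated-tiling passes needed to fit a framebuffer in 10MB of eDRAM.
# Assumptions: 4B colour + 4B Z per sample, 720p target.
edram_bytes = 10 * 1024 * 1024
bytes_per_sample = 4 + 4

def tiles(width, height, msaa):
    return math.ceil(width * height * msaa * bytes_per_sample / edram_bytes)

for msaa in (1, 2, 4):
    print(f"{msaa}xMSAA @ 720p: {tiles(1280, 720, msaa)} tile(s)")
# 1 tile at 1x, 2 at 2x, 3 at 4x: hence "3+ tiles for 4xMSAA". Every extra
# tile means re-submitting the geometry that touches it, which is why a
# heavier tessellation pipeline makes tiling costs more interesting.
```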

I'm actually still wondering why the daughter die seems to have remained at 80nm when there is 55nm available right now. Could be the interface between the two chips.
 
do you think they would be able to do the same with their next-gen machine? With that said, wouldn't that give the chips a fat pipe between each other without having to worry about the problems current die shrinks face, or am I mistaken?

The main economic issue will be getting the conglomerate die under a certain size. Sure, there are benefits to manufacturing a single die, but at the same time it wouldn't be great if yields were crap. And of course, there is thermal density to consider. So it'd be a tough call, as next gen will likely start at 32/28nm, and the roadmap beyond 22nm is rather sketchy...
 
If your comment was in regards to "custom" non-desktop parts, then there is no reason AMD cannot offer such. Bulldozer is specifically designed for flexible SKU production, and ATI has already shown itself capable of creating additional GPU SKUs. Then again, there could be an advantage to AMD developing a GPU chip that would find itself in PCs.

If AMD wants to see their silicon inside the Xbox 720, they will have to build a custom CPU indeed. IIRC all the code running on the Xbox is encrypted; shaders running on the GPU are not, but I imagine MS will want them to be on the next-gen hardware as well. So regardless of whether MS decides to go with an x86 arch this time, the CPU has to be custom built.

OTOH, if you are going to order 20 million CPUs, I don't think AMD will have a problem building a custom part for you, just like they did for the GPU.
 
If AMD wants to see their silicon inside the Xbox 720, they will have to build a custom CPU indeed. IIRC all the code running on the Xbox is encrypted; shaders running on the GPU are not, but I imagine MS will want them to be on the next-gen hardware as well. So regardless of whether MS decides to go with an x86 arch this time, the CPU has to be custom built.

OTOH, if you are going to order 20 million CPUs, I don't think AMD will have a problem building a custom part for you, just like they did for the GPU.

But you're not getting an order for 20 or 50 million CPUs; you're getting a CPU design contract with possibly some small royalties on a limited allotment.

That said, the only way IMHO they can even win that design contract is with the interconnect between the CPU and GPU. Otherwise, they just can't compete against a Power7 variant unless they're willing to do it for free or close to it.
 