AMD/ATI for Xbox Next?

I'd wonder how averse AMD would be to modifying one of their CPUs to suit MS's needs (considering their own apparent issues with their PC designs). I mean similarly to what happened with Xenon, i.e. extended vector processing, hardware security measures, cache locking by the GPU, implications of the GPU handling the northbridge/memory control... Does AMD have the resources for such a side-project to do a good job?
Well, I think that they would not modify one of their CPUs. It looks like they still don't have a nomenclature like Intel's for the core and un-core parts of their CPUs. Basically AMD uses the same 'core' on most of their products; it would be more about choosing how many cores they want. I don't expect AMD to do any changes for MS. If MS works with them, they will have what AMD has, most likely Bulldozer cores. MS may have its word on the L2 cache size, the number of cores, the memory controller, etc. (the un-core part), but that's it. I don't expect AMD to change (or to be in a situation where they can afford to change) the SSE5 instructions, the width of those units, the number of execution units, etc. But it's not a bad thing per se; even if AMD can't match Intel, they still make good CPUs.
Bulldozer will have to deal with a GPU on die, so AMD will make changes to make that convenient, but I don't expect them to change one bit for MS. Once again, that's not so bad, as AMD and ATI may actually have at least as many clues about what they are doing as MS, if not more.
When IBM did Xenon for MS, it was a 'tiny' team that did the job while somehow 'stealing' advancements from a bigger team... Anyway, AMD will have Bulldozer; Bulldozer is supposed to be their building block from low end to high end, supporting Fusion/GPU somewhere in the middle. It's a 'less from scratch' job than what the IBM team did for MS (even with some help).
I think AMD could do that.


I've not been particularly impressed by their power consumption in the desktop space, nor their cache density, but how that compares to what IBM can do would be very difficult to say...
Just some concerns...
Their architecture shows its age and they push frequencies to stay on Intel's tail. If you add a less performant process on top... but I don't think IBM has any advantage in this regard.
I still wonder if AMD/GF have an agreement about on-die eDRAM and, if yes, whether it will be available on a process that could serve a Fusion design well. That would allow for some nice wins in density :)
Intel did a great job on its Core architecture and then on Nehalem (even if games can actually suffer from the tinier L2), but AMD has improved a lot since their first Phenom and the 65nm disaster.
IBM has mostly given up fighting x86 at what it does (which is a bit of everything lol); MS (and others) went to IBM for the business side of things, imho. Now AMD has Fusion coming, still needs money, dares to face Intel, and has gone almost fab-less; who knows, they may put out an x86 CPU at a reasonable price if it's packed with a GPU, if that grants them an advantage in the discrete GPU market (having a fully closed AMD box may benefit them way more than having an orphan chip (Xenos) in the 360 or having a TWIMTBP program). The advantage could also be on the CPU side: games optimized for their SSE5 instructions (which might be a reason why Intel will do its utmost to prevent that).
Anyway, I would not discard them on technical merit just because of Intel's greatness (which has been proven true a lot lately). Overall I feel the opposite; it's more a question of whether AMD has an incentive to sell for pretty cheap what will be the second-best CPU in the business (or better, who knows, not much hope tho) and, on the GPU side, a part that gives you the most perf per dollar (that's true now and may change, but ATI has some serious expertise). They are really in need of money tho ==> who knows ;)

EDIT
In regard to CPU power consumption, I would add that AMD is likely to add the same "functionalities" as Intel in their next core, say variable clock speeds from turbo mode to power-saving mode depending on the workload; that's something to take into account while speaking about power consumption.
 
I think that at the moment AMD doesn't have a free guy even to make coffee, due to the race to match Intel on desktop and mobile.
Bulldozer will be great, or it will be the last architecture.
 
On the flip side, Bulldozer in the Xbox would get them at least 30M chips sold; with an ATI GPU, AMD could be looking at 60M chips sold over 5 years, or even more. That would go a long way to keeping AMD around.
 
Meh, just put a quad-core ARM Cortex A9 with NEON256 in there, clock it at 3GHz and you are good to go :LOL:

Highly power efficient, great at SIMD, OoOE, and you are free to add whatever you want to add...
 
Meh, just put a quad-core ARM Cortex A9 with NEON256 in there, clock it at 3GHz and you are good to go :LOL:

Highly power efficient, great at SIMD, OoOE, and you are free to add whatever you want to add...
That would be pretty neat indeed, along with a sea of tinier ARM cores also tied to SIMD units. :)

Anyway, I've been thinking more about the advantages of a reasonably sized Fusion chip, and about something I didn't take into account earlier: Natal.
It's pretty clear that the thing will ship with Natal 2. I expect Natal 2 to do what Natal supposedly can't, i.e. deal with higher-resolution "point clouds", track hand postures, allow almost one-to-one facial motion mapping, and do all that with less lag. So basically the peripheral should not be forgotten while discussing what the main hardware will look like. Natal 2 will cost more, as it needs a higher-resolution camera and more processing resources to achieve its goals.
Overall I think that this point should push MS to strongly consider a single-chip system; the savings are neat: one cooling system, one bus, a simpler mobo, it's easier to have a tinier/sexier box, etc.
 
Next consoles will last until 2017-2018. They need to be really future proof. Where is rendering technology heading? Is there still a need for a CPU? Couldn't a GPGPU design do both jobs? I believe the next console will need to pack as much processing power as it can, and that the right design to do it is dropping all fixed-function units. A 200mm^2@28nm GPU could pack 3.5B+ transistors, plenty enough to deliver about 5 TF of processing power. In their SIGGRAPH09 presentations, Epic was talking about a NEED for a terabyte/s of external bandwidth for future graphics engines. Is there any way to deliver this amount of bw? Even with a 512-bit bus and XDR2@12GHz it's not possible!
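A quick back-of-the-envelope check of that last claim (a sketch, using only the 512-bit width and 12 GHz effective data rate quoted above):

```python
# Peak bandwidth of a hypothetical 512-bit bus running XDR2 at a 12 GHz
# effective data rate: bus width in bytes times transfers per second.
bus_width_bits = 512
data_rate_hz = 12e9

peak_bw = (bus_width_bits / 8) * data_rate_hz
print(f"peak bandwidth: {peak_bw / 1e9:.0f} GB/s")   # -> 768 GB/s

# 768 GB/s falls short of 1 TB/s, and far short of the 4 TB/s 'effective'
# figure Epic derives from reading 1 byte of memory per FLOP at 4 TFLOPS.
```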
 
Next consoles will last until 2017-2018. They need to be really future proof. Where is rendering technology heading? Is there still a need for a CPU? Couldn't a GPGPU design do both jobs? I believe the next console will need to pack as much processing power as it can, and that the right design to do it is dropping all fixed-function units. A 200mm^2@28nm GPU could pack 3.5B+ transistors, plenty enough to deliver about 5 TF of processing power. In their SIGGRAPH09 presentations, Epic was talking about a NEED for a terabyte/s of external bandwidth for future graphics engines. Is there any way to deliver this amount of bw? Even with a 512-bit bus and XDR2@12GHz it's not possible!
EPIC said:
In 2012, a 4 Teraflop processor would execute 16000 operations per pixel at 1920x1080, 60 Hz
EPIC said:
Effective bandwidth demands will be huge
Typically read 1 byte of memory per FLOP
4 TFLOP of computing power demands 4 TBPS of effective memory bandwidth!
You speak of that figure, right?
I think you have to consider this part of the slide too:
Software Tiled Rendering

Split the frame buffer up into bins
Example: 1 bin = 8x8 pixels
Process one bin at a time
Transform, rasterize all objects in the bin

Consider
Cache efficiency
Deep frame buffers, antialiasing
To me he (Tim Sweeney) is speaking of internal bandwidth, and it's the "ideal" figure. In the GPU case this bandwidth can be provided via L1, local store and registers. I'm unsure of the exact figures (I read something about the new 58xx), but GPUs are farther from that bandwidth figure than they are from 4 TFLOPS.
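As a side note, here is one way those slide numbers work out (my own accounting, not from the talk: I'm assuming one "operation" counts as one multiply-add, i.e. 2 FLOPs, and the 16-bytes-per-pixel frame buffer depth is a made-up figure):

```python
# Back-of-the-envelope check of the Epic slide figures (assumptions noted inline).
flops = 4e12                        # 4 TFLOPS processor (from the slide)
pixels_per_s = 1920 * 1080 * 60     # 1080p at 60 Hz (from the slide)

flop_per_pixel = flops / pixels_per_s
print(f"{flop_per_pixel:,.0f} FLOPs per pixel")     # ~32,150
# The slide's 16000 figure matches if one op = one multiply-add (2 FLOPs):
print(f"{flop_per_pixel / 2:,.0f} ops per pixel")   # ~16,075

# Why 8x8 bins help cache efficiency: with a hypothetical 'deep' frame
# buffer of 16 bytes per pixel, one bin's working set stays tiny:
print(f"bin working set: {8 * 8 * 16} bytes")       # 1024 B, fits in L1/local store
```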
 
sure, with latencies measured in minutes :devilish:
Latencies will likely be the same as today, but hey, they are bad enough as it is today.

I think hardware multi-threading is here to stay; I think that is the only efficient way to deal with cache misses without stalling CPU cores. The hard-coded way of dealing with latencies on the SPUs (using double or triple buffering) will probably be unique to Cell; I don't expect any new CPU to go that route.

TBI was impressive as hell; I think it will help Moore's law to still apply for many years to come.
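For what it's worth, here is a minimal sketch of that SPU-style double-buffering pattern (hypothetical stand-ins: on Cell, `dma_fetch` would be an asynchronous DMA into local store, not a worker thread):

```python
# Double buffering: start fetching chunk i+1 while computing on chunk i,
# so transfer latency hides behind useful work.
from concurrent.futures import ThreadPoolExecutor

def dma_fetch(data, i, chunk):
    # Stand-in for an async DMA transfer into a local buffer.
    return data[i * chunk:(i + 1) * chunk]

def compute(buf):
    # Stand-in for the per-chunk kernel.
    return sum(x * x for x in buf)

def process(data, chunk=4):
    n = (len(data) + chunk - 1) // chunk
    total = 0
    with ThreadPoolExecutor(max_workers=1) as dma:
        pending = dma.submit(dma_fetch, data, 0, chunk)   # prime the first buffer
        for i in range(n):
            buf = pending.result()                        # wait for the current buffer
            if i + 1 < n:
                pending = dma.submit(dma_fetch, data, i + 1, chunk)  # kick off next fetch
            total += compute(buf)                         # compute overlaps the fetch
    return total

print(process(list(range(16))))   # -> 1240
```

A hardware-multithreaded core gets the same overlap for free: when one thread stalls on a miss, another runs. Here the programmer has to schedule the overlap by hand, which is exactly the Cell-specific burden described above.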
 
I am hoping for AMD to deliver both CPU and GPU for the new Xbox. It would allow for so many possibilities it just makes me excited thinking about it. I'm unsure how they'd be able to do backwards compatibility without brute forcing emulation of the CPU. They could always do what Apple did with OSX and set up an emulation layer for PPE code to run on x86 hardware. I know there's more to it than that and the costs involved may be too high.

This thread has been a great read so far.

My main concern for AMD regarding this is: are they the ones fronting the R&D, or will MS pay for it?
 
I am hoping for AMD to deliver both CPU and GPU for the new Xbox. It would allow for so many possibilities it just makes me excited thinking about it. I'm unsure how they'd be able to do backwards compatibility without brute forcing emulation of the CPU. They could always do what Apple did with OSX and set up an emulation layer for PPE code to run on x86 hardware. I know there's more to it than that and the costs involved may be too high.

This thread has been a great read so far.

My main concern for AMD regarding this is: are they the ones fronting the R&D, or will MS pay for it?
As somebody else pointed out, I'm not sure AMD is in a situation where they can fund extra R&D; I think MS may have to come in with quite some money and a workforce of its own if this is to happen (an AMD Fusion chip).
 
Well, in the end, the most important factor will be how much power can be packed into a box that will be sold for at most 399 dollars, maybe even close to breaking even at that price.

The Nintendo Wii and the whole financial crisis have shown that it is even more important than ever before to sell these things at close to breaking even (or making a profit with sales of games/peripherals/services).

Because selling hardware at a loss and hoping that software sales will help to offset it is no guarantee anymore, I think that MS would like to address this issue as well.

And once again, I would think that MS would like to have the advantage of being first out, or at least out before Sony, to establish itself as the lead platform. These two factors are high on MS's agenda, I would think...

I don't know how much MS will want to push the graphical envelope, but with Natal incoming and those kinds of experiences, perhaps the push will not be as aggressive as it has been before. I once thought that a box with something along the lines of a quad-core Phenom II and a new ATI GPU (the latest they have now), for a 2012 timeframe, could perhaps be made at a reasonable price, hitting the 399-dollar price point at close to breaking even.

The machine would still be powerful, but reasonably cheap and easy to program for. If anything, it would be nice if MS focused on having a built-in storage solution and perhaps also a sufficient amount of RAM for the kind of worlds developers would like to create...

Like I said, Nintendo's Wii did change the climate, in that perhaps you don't need the latest tech to sell... but hopefully, because MS also does lots of gfx research, they would like to cater to all gamers/markets: those who like great gfx, and those who don't care so much as long as the immersion is great (as with a peripheral like Natal, if it delivers)...
 
Just wanted to add this:
[image: AMD "Dragon" platform diagram]


In order to perhaps reach the 399 price point and be close to breaking even, a modified "Dragon" platform could fit the bill, but with the addition of a newer gfx card.
 
A noob question:

Would a hypothetical AMD next CPU need to have at least 6 cores, barring magic, to be BC with the actual box?

If so, I can't see it happening easily due to AMD thermals...
 
Owning both an i7 at 3.8 :( (C0 stepping) and an X4 at 4.0, what issues with thermals? My i7 runs so much hotter.

Don't trust TDPs; I have found Intel generally runs hotter at a given TDP.

A Dragon-based platform could work, but there is heaps of crap in the north and south bridges you wouldn't need. If you want unified memory and are going to use the memory controller of a K10, then I could see bandwidth issues. Maybe you could have some funky NUMA between the CPU and GPU.
 
A noob question:

Would a hypothetical AMD next CPU need to have at least 6 cores, barring magic, to be BC with the actual box?

If so, I can't see it happening easily due to AMD thermals...
Basically... I've no clue :LOL: You're concerned about the number of hardware threads, right?
I don't know what could be achieved by emulation in this regard. If we speak of SIMD only, three cores (not multi-threaded) would be enough, as these units are not multithreaded in Xenon, if my memory serves right.
There is a lot of unproven talk about Bulldozer; some say AMD is about to support some form of multithreading they call "cluster-based multithreading". The whole thing is unclear, and in fact nobody has a real clue about what Bulldozer is going to look like in the end.
About cluster based multi threading:
[image: slide about cluster-based multithreading]

From what I read, it would be about doubling your execution resources instead of having threads share the same resources.
It could look like this (*important note: the "speculated" tag on top ;) ):
[image: speculative Bulldozer core micro-architecture diagram, v0.5b]

Bulldozer cores may end up bigger than Phenom's; maybe the odds are that MS will again end up with an odd number of cores (say three). In any case, this is more raw speculation than anything else.
A thread about Bulldozer is available here.
 
You speak of that figure, right?
I think you have to consider this part of the slide too:
To me he (Tim Sweeney) is speaking of internal bandwidth, and it's the "ideal" figure. In the GPU case this bandwidth can be provided via L1, local store and registers. I'm unsure of the exact figures (I read something about the new 58xx), but GPUs are farther from that bandwidth figure than they are from 4 TFLOPS.


http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf
The last slide clearly says that he is talking about memory bandwidth.
Anyway, RV870 has 1 terabyte/s of bandwidth for texture fetch and 435 GB/s of bandwidth from L1 to L2 cache.
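Those two numbers line up if you multiply the aggregate per-clock fetch widths by the core clock (my arithmetic; the 850 MHz clock, 80 texture units x 16 bytes/clock, and 512 bytes/clock L1-L2 path are assumptions based on published HD 5870 specs, not figures from this thread):

```python
# Sanity check of the RV870 (HD 5870) bandwidth figures.
core_clock = 850e6                   # assumed core clock in Hz

l1_fetch = 80 * 16 * core_clock      # 80 texture units x 16 B/clock from L1
l1_to_l2 = 512 * core_clock          # 512 B/clock aggregate L1<->L2 path

print(f"L1 texture fetch: {l1_fetch / 1e12:.2f} TB/s")   # ~1.09 TB/s
print(f"L1 -> L2:         {l1_to_l2 / 1e9:.0f} GB/s")    # ~435 GB/s
```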
 
Ultimately, with next gen, if we are to make educated guesstimates, we would have to lay out what we think their market approach is; e.g. a focus on BC or on 3D makes a big impact on the system design.

If MS were to focus on 3D displays, it would seem that this would open up an opportunity to leverage the yield advantage of 2 smaller GPUs instead of 1 larger GPU. As someone who wears glasses, I am not so hip on 3D displays at this point, but if MS were to pitch a new console with 3D gaming (3D output, 3D motion controller + camera) they very well could be targeting the mythical "casual" market.

Of course, they may not go the 3D route, as the market may not be ready in 2012. Or consumers might get the impression that you *need* a 3D TV, and thus you cut off your userbase. So if they have 3D as a secondary feature, they could go a different direction.
 