Predict: The Next Generation Console Tech

Got a link to the statement by Carmack by any chance? If it was just saying you get more juice out of a closed box, I can't understand how PC gamers (which I'm one of, too) here would have raised hell about it - however, there are more ways of saying that, some of which could indeed raise hell.

I can't say with certainty which Carmack comment got Nebula (who is normally pro-developer, and measured and thoughtful in his posts) so upset here:

http://forum.beyond3d.com/showpost.php?p=1543703&postcount=76

but it might be this one (bolding is mine) ...

The J. C. said:
Last I heard, Nvidia was going to be providing OpenGL for the X-Box. If they do, we will probably do some simultaneous development for X-Box. If not, it would have to wait until after the game ships to be ported.

The X-Box specs put it as a larger leap over PSX2 than PSX2 is over Dreamcast, but anyone with sense can see that by the time it ships, the hardcore gaming PC will already be a generation ahead of it in raw power.

The X-Box should be able to keep up for a while, because you can usually expect to get about twice the performance out of a fixed platform as you would when shooting for the broad PC space, just because you can code much more specifically.

I don't have much of a personal stake in it, but I am pulling for the X-Box. If you need to pick a feudal lord in the console market, I would take microsoft over sony/sega/nintendo any day.

John Carmack

...or one of the others like it he's made.

Just to mention one of my own performance-related findings regarding the Mafia 2 benchmark. On the same multiboot PC (same driver revisions on all OS installs), with the CPU as the bottleneck, I got the following results:
Opteron 170 @ 2.5 GHz, 4GB DDR1 @ (about) 416 MHz:
42 fps - XP
37/38 fps - Vista 32 bit
32 fps - Windows 7 64-bit

Given that Mafia 2 is a 32-bit exe, I can only assume that the performance tanking is due to the assfest that is CPU PhysX and some kind of 64-bit PhysX catastrophe (I tried different versions of the PhysX drivers, all just as bad). And yep, the results were completely repeatable. I could get a greater than 30% performance increase just by switching back to XP!

Win 7 32-bit performed exactly like Vista 32-bit, btw.
 
Care to elaborate with a source?
Currently there's nothing about the CPU's programming model/feature set publicly available outside of gcc's codebase. But from there:

http://gcc.gnu.org/viewcvs/trunk/gcc/config/rs6000/rs6000.c?view=markup

Check the processor_target_table entry for a2 (line 1786 or thereabouts - this is trunk, so changes may occur between now and the time you read it). Then check the entry for power7 (line 1830 or thereabouts). Note the absence of VSX from a2.
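For anyone who doesn't want to dig through the file, the entries in question look roughly like this (paraphrased from memory of trunk - the struct and identifiers are real gcc names, but the exact flag combinations here are approximate):

Code:
/* From gcc's config/rs6000/rs6000.c: each entry maps a -mcpu= name
   to a scheduling model and a mask of default target flags.  */
struct rs6000_ptt
{
  const char *const name;               /* canonical -mcpu= name */
  const enum processor_type processor;  /* scheduling model */
  const int target_enable;              /* default MASK_* flags */
};

static struct rs6000_ptt const processor_target_table[] =
{
  /* ... */
  {"a2", PROCESSOR_PPCA2,
   POWERPC_BASE_MASK | MASK_PPC_GFXOPT | MASK_POWERPC64
   | MASK_POPCNTB | MASK_CMPB | MASK_NO_UPDATE},   /* no MASK_VSX */
  /* ... */
  {"power7", PROCESSOR_POWER7,
   POWERPC_7400_MASK | MASK_POWERPC64 | MASK_PPC_GPOPT | MASK_MFCRF
   | MASK_POPCNTB | MASK_FPRND | MASK_CMPB | MASK_DFP | MASK_POPCNTD
   | MASK_VSX},                                    /* VSX on by default */
  /* ... */
};

The telling part is that last flag: power7 turns on MASK_VSX by default, a2 doesn't.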

Power7 isn't really an option, though.
Why not?
 
I can't say with certainty which Carmack comment got Nebula (who is normally pro-developer, and measured and thoughtful in his posts).
Nebula is not pro-developer, and I could disagree with the second part too.
Carmack's take on this sounds about right: 25-35% API overhead plus single-platform development could lead to twice the efficiency on targeted hardware specs.

Taking Crysis 2 as a counter-example is pretty ill-inspired, as it's mostly PC tech.
 
Taking Crysis 2 as a counter-example is pretty ill-inspired, as it's mostly PC tech.

How is that? Imagine Crysis 2 running on a PC equipped with a 7800GTX or thereabouts with a good CPU... it won't be playable. No matter what settings. The same should be true for an XeGPU-equipped PC. My 8800GTS 320 (which was bought a week before the PS3 hit in Europe) runs Crysis "well"... but it's also a HUGE GPU...
 
The development of Crysis 2 was far from ideal from what I read. CryTek started by porting across core code, and then trying to optimise it. A 'proper' console engine would instead have been designed from the ground up for the platforms, with an eye on processing resources and memory limitations. Crysis 2 is in effect a PC port, shoehorning PC thinking onto the consoles. And this shouldn't come as a surprise considering this is CryTek's first ever console game! That's the way everyone does it. That's how this generation started, with games getting up and running first on the PS3's PPU, and then bits branching out onto SPUs as they could, with developers learning the system and changing development practices over time.

It will be very interesting to see how that does or doesn't change next-gen. At the moment the API doesn't have as much of an impact as system architecture differences do. If next-gen systems are very similar to PC architectures, the API will be the only differentiation between them and PCs.
 
I believe this gives a good idea of what a "low"-power GPU for the Wii2 could be; still, I hope N went with something a bit more powerful.
 
How is that? Imagine Crysis 2 running on a PC equipped with a 7800GTX or thereabouts with a good CPU... it won't be playable. No matter what settings.

That's a poor comparison though, as the PS3 isn't a 7800GTX + good CPU. We all know the GPU gets a huge leg up from Cell, so comparing directly to a PC GPU which receives no CPU help in rendering work makes no sense. Besides, how do you know it would be unplayable? At low settings and sub-HD it may well be able to hang in there at near-30fps levels. Not that I'm saying it would; after all, the architecture is so old that the game hasn't even been formally validated to run on it on PC. Hence the level of optimisation it would have received would be zero, as opposed to whatever modern PC GPU architectures get.

The same should be true for an XeGPU-equipped PC. My 8800GTS 320 (which was bought a week before the PS3 hit in Europe) runs Crysis "well"... but it's also a HUGE GPU...

Hard to judge. The best comparison point would be something between an HD2600XT and HD2900Pro. It would make for an interesting benchmark anyway.

I actually do agree with you that Crysis 2 makes no worse an example than many games. I also agree with Carmack that at a very high level it is possible to get roughly twice the performance from a fixed platform compared with a PC. But there are tons of caveats around that which need to be considered, e.g. the level of optimisation on each platform: the advantage will vary from nothing to much greater than double depending on how well both platforms are optimised. Obviously in theory it's always possible to extract close to console performance out of any PC architecture provided the developers put enough effort in, so the 2x figure probably assumes only a "normal/average" amount of optimisation on the PC and a high level of optimisation on the consoles (which is normal).

I also think in some specific areas you will not get double the performance even in a "normally" optimised game. Rendering resolution superiority, for example, will always be apparent. Take the 8800GTS: it can be roughly taken to be twice as powerful as Xenos, and yet it will run virtually any multiplatform game at similar or better settings and framerate at a higher resolution than the 360.
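To put rough numbers on the resolution point (my own back-of-envelope arithmetic with an assumed 2x power ratio, not from any source): per-frame cost scales roughly linearly with pixel count, so a raw power ratio translates fairly directly into a resolution ratio.

Code:
#include <stdio.h>

/* Pixel-budget arithmetic behind the resolution argument.
   All inputs are assumptions for illustration only.  */
int main(void)
{
    double xenos_px = 1280.0 * 720.0;  /* typical 360 render target */
    double ratio    = 2.0;             /* assumed 8800GTS vs Xenos  */
    double px_1080p = 1920.0 * 1080.0;

    printf("Pixel budget at 2x: %.0f\n", xenos_px * ratio); /* ~1.84M */
    printf("1080p needs:        %.0f\n", px_1080p);         /* ~2.07M */
    return 0;
}

So doubling Xenos's throughput buys ~1.84M pixels per frame at 360-level settings: a clear resolution win over 720p, though still a little short of full 1080p.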
 
We all know the GPU gets a huge leg up from Cell, so comparing directly to a PC GPU which receives no CPU help in rendering work makes no sense.

Actually, in Crysis 2 the Cell's involvement is quite limited. The performance in Crysis 2 benefits much more from low-level programming and clever algorithms than anything else, imho.
 
I believe this gives a good idea of what a "low"-power GPU for the Wii2 could be; still, I hope N went with something a bit more powerful.

Aside from the low number of texture units, this wouldn't be half bad. I think we can expect something in this range. Also, the E4690 shrunk to 40nm or 28nm and with eDRAM could be a possibility if they don't need DX11 features (and it goes along with the R700 rumors).
 
I believe this gives a good idea of what a "low"-power GPU for the Wii2 could be; still, I hope N went with something a bit more powerful.

If it's launching in 2012 there's a pretty decent chance it'll be 28nm, as there are two potential suppliers of that node and AMD is familiar with dealing with both of them. Well, I hope they'll be launching on 28nm; they seem loath to do process updates, so I would like to see them at least start with the latest one. At the very least we could probably expect better power/performance than the 40nm parts we're used to!
 
Also, up to 6 displays with Eyefinity is interesting with regard to the touch controller rumors.

I guess this would be at 25-30W max @ 28nm, not too shabby (I think Nintendo will not go over 80-90W for the whole system).
 
Also, up to 6 displays with Eyefinity is interesting with regard to the touch controller rumors.

Yes, the superior (or simply proven and tested) multi-display control hardware embedded in AMD's latest GPUs should be a preferential factor over nVidia/PowerVR/DMP/others if Nintendo wants the controller to display something while doing 1080p rendering on the TV.

Then again, rumours converge on an R7xx chip, and Eyefinity only appeared with Evergreen.

I guess this would be at 25-30W max @ 28nm, not too shabby (I think Nintendo will not go over 80-90W for the whole system).

35W is the TDP for 600MHz Turks @ 40nm + 4*2Gbit 800MHz GDDR5 chips.
Taking away the memory, I guess a supposed Turks shrink to 28nm should consume closer to 15W.
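As a sanity check on that ~15W guess, here's the back-of-envelope dynamic power scaling (P = C*V^2*f, with switching capacitance roughly halving over a full-node shrink; the memory share, voltages and scaling factor below are my own assumptions, not vendor numbers):

Code:
#include <stdio.h>

/* Rough estimate for a Turks-class GPU shrunk from 40nm to 28nm.
   Dynamic power scales as C * V^2 * f.  All inputs are assumptions
   for illustration only.  */
int main(void)
{
    double board_tdp = 35.0;   /* quoted: GPU + 4x 2Gbit GDDR5 */
    double mem_power = 5.0;    /* assumed GDDR5 share of the TDP */
    double cap_scale = 0.55;   /* assumed capacitance factor, 40->28nm */
    double v40 = 1.10, v28 = 1.00;  /* assumed core voltages */

    double gpu40 = board_tdp - mem_power;
    double gpu28 = gpu40 * cap_scale * (v28 / v40) * (v28 / v40);

    printf("GPU alone at 40nm: %.1f W\n", gpu40);        /* ~30 W */
    printf("GPU alone at 28nm: %.1f W (est.)\n", gpu28); /* ~13-14 W */
    return 0;
}

That lands in the same ballpark as the 15W guess.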
 
Yes, the superior (or simply proven and tested) multi-display control hardware embedded in AMD's latest GPUs should be a preferential factor over nVidia/PowerVR/DMP/others if Nintendo wants the controller to display something while doing 1080p rendering on the TV.

Then again, rumours converge on an R7xx chip, and Eyefinity only appeared with Evergreen.

Correct, but it would still be possible that they were just using the R7xx chip in the prototype because the Evergreen chip wasn't ready. Or they could just update the R7xx design with Eyefinity (it will most probably be a custom design anyway because of memory).

35W is the TDP for 600MHz Turks @ 40nm + 4*2Gbit 800MHz GDDR5 chips.
Taking away the memory, I guess a supposed Turks shrink to 28nm should consume closer to 15W.

Yes, but I guess Cafe will have memory too :D

But as I said before, I would really consider these specs to be close to the real deal in the end.
 
Also, up to 6 displays with Eyefinity is interesting with regard to the touch controller rumors.

Not sure; Eyefinity is more about physical outputs. Eyefinity itself is done by rendering to one big framebuffer (something that may be possible on very old GPUs), while multiple render targets, which is what you may need here, have been supported since R300.
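For clarity, "multiple render targets" just means binding several colour attachments and letting one pass write to all of them; a minimal sketch in OpenGL (assuming a GL 3.x context and a loader like GLEW; the calls are standard GL):

Code:
#include <GL/glew.h>

/* Minimal MRT setup: one FBO with two colour attachments that a
   single draw call writes to simultaneously.  tex0/tex1 are assumed
   to be already-created 2D textures.  */
void setup_mrt(GLuint fbo, GLuint tex0, GLuint tex1)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex1, 0);

    /* Route fragment shader outputs 0 and 1 to the two attachments. */
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
}

Whether the hardware then scans those buffers out to separate physical displays is the part Eyefinity's display controllers handle; the rendering side is old hat.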
 
If it's launching in 2012 there's a pretty decent chance it'll be 28nm, as there are two potential suppliers of that node and AMD is familiar with dealing with both of them. Well, I hope they'll be launching on 28nm; they seem loath to do process updates, so I would like to see them at least start with the latest one. At the very least we could probably expect better power/performance than the 40nm parts we're used to!
Hmm, I'm not convinced; TSMC and GF may be capacity-limited in 2012 (early days for the 32/28nm processes). Between the mobile manufacturers, AMD and Nvidia, that's already quite some wafers "reserved".
I hope N went with something better than, say, a 4670 (so 400 SPs); if not a 4770, then at least six arrays (so 600 SPs). I'm actually pretty interested in the concept (especially if the system offers good multimedia capability / is able to run a proper OS).

Altogether I believe it could be possible to stick 600 SPs and a Xenon"+" on the same die, with the die connected through a 128-bit bus to 2GB of GDDR5 running at around 1GHz. It would crush today's systems, and once they make the move to 32nm, N could fight back against future MS/Sony hardware on price pretty easily.
 
Hmm, I'm not convinced; TSMC and GF may be capacity-limited in 2012 (early days for the 32/28nm processes). Between the mobile manufacturers, AMD and Nvidia, that's already quite some wafers "reserved".

Well I hate to spoil it for you but...

Since there are fabs for all companies involved to source from, and with GloFo's recent capacity expansion coming on stream in time for the 28nm process node, chances are pretty good they ought to have more than enough wafers available. If they are indeed supply-limited, it'll probably be the screen, especially if the rumoured tactile feedback is correct. See http://en.wikipedia.org/wiki/GlobalFoundries for Fab 8's expected completion in 2012.


I hope N went with something better than, say, a 4670 (so 400 SPs); if not a 4770, then at least six arrays (so 600 SPs). I'm actually pretty interested in the concept (especially if the system offers good multimedia capability / is able to run a proper OS).

Well, one has to assume they aren't completely brain dead... Anyway, if they are, we can always console ourselves with the fact that we could expect either Sony or Microsoft to at least provide decent hardware.

Altogether I believe it could be possible to stick 600 SPs and a Xenon"+" on the same die, with the die connected through a 128-bit bus to 2GB of GDDR5 running at around 1GHz. It would crush today's systems, and once they make the move to 32nm, N could fight back against future MS/Sony hardware on price pretty easily.

I'm sure that if they aren't launching on 28nm they'd at least be launching on 32nm, since that process would be more than mature enough by the time they started mass production. They should be able to source at minimum enough 32nm wafers by the middle of 2012, so I don't think 40nm is a likely candidate, especially as GloFo doesn't offer 40nm and TSMC is not an IBM alliance fab, which makes porting PPC a lot more difficult, if it is even possible, for various legal or technical reasons.
 
Well I hate to spoil it for you but...

Since there are fabs for all companies involved to source from, and with GloFo's recent capacity expansion coming on stream in time for the 28nm process node, chances are pretty good they ought to have more than enough wafers available. If they are indeed supply-limited, it'll probably be the screen, especially if the rumoured tactile feedback is correct. See http://en.wikipedia.org/wiki/GlobalFoundries for Fab 8's expected completion in 2012.
That would be nice.



I'm sure that if they aren't launching on 28nm they'd at least be launching on 32nm, since that process would be more than mature enough by the time they started mass production. They should be able to source at minimum enough 32nm wafers by the middle of 2012, so I don't think 40nm is a likely candidate, especially as GloFo doesn't offer 40nm and TSMC is not an IBM alliance fab, which makes porting PPC a lot more difficult, if it is even possible, for various legal or technical reasons.
As I still hope for a single chip, I was actually thinking of the 45nm SOI process used by MS for its last chip.
Actually, if they go for a single chip, I can't see them using 28nm; like AMD with Llano, they should go with GF's 32nm HKMG process.
----------------------------------------------
Live edit :LOL:

I just read that while posting:
http://www.hardware.fr/news/11480/l-igp-llano-s-appelera.html
228mm², ~1 billion transistors.
Clearly, considering the rumours we've got, and if you're right about GF capacity, a 32nm chip may make it to the market. Actually, going by AMD's figures, and if N uses a Xenon derivative (3 cores vs 4 in Llano, most likely lighter cores, less cache), they should easily be able to pack more than 400 SPs into the chip. Knowing big N, the idea of shipping something of the same size as the HD4770 (~180mm²) is all too tempting.

EDIT
Not that easily: the HD4770 is worth 800-odd million transistors. Still, they may have some extra room depending on their budget. A 180mm² chip would already be quite a big move for N looking at their past record.
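The rough numbers behind that edit (a back-of-envelope density check using the figures quoted above, plus the commonly cited ~826M transistor count for RV740 - illustrative arithmetic only):

Code:
#include <stdio.h>

/* Transistor-density check: Llano packs ~1.0B transistors into
   228mm^2 at 32nm; RV740 (HD4770) is ~826M transistors.  */
int main(void)
{
    double llano_trans = 1000e6;
    double llano_area  = 228.0;                /* mm^2 */
    double rv740_trans = 826e6;

    double density = llano_trans / llano_area; /* transistors per mm^2 */
    double rv740_at_32nm = rv740_trans / density;

    printf("Llano density: %.1f M transistors/mm^2\n", density / 1e6);
    printf("RV740-class GPU at that density: ~%.0f mm^2\n", rv740_at_32nm);
    return 0;
}

At Llano's density a full RV740 alone already comes out at around 188mm², before adding any CPU cores - hence "not that easily".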
 
Sorry all for coming back to this theme... I have heard many things lately... but would Sony still be thinking in a ray-tracing paradigm? And with a customised PowerVR Series 6 (16 to 32 cores at 28nm @ 450MHz with less than 60 watts) with some "CausticThree" on die as the graphics processor?
 
Are we discussing an AMD Fusion chip made of a PowerPC CPU and an AMD GPU?
That would be a bold move.
It looks contrary to AMD's strategy, as they lately declared they want to use x86 everywhere in their APUs, though in the context of tablets and smartphones.

If it's reliable, an IBM CPU and AMD GPU would point at separate chips (which allows something like 1GB of 64-bit DDR3 + 1GB of 128-bit GDDR5, and separate processes).
 
Are we discussing an AMD Fusion chip made of a PowerPC CPU and an AMD GPU?
That would be a bold move.
It looks contrary to AMD's strategy, as they lately declared they want to use x86 everywhere in their APUs, though in the context of tablets and smartphones.

If it's reliable, an IBM CPU and AMD GPU would point at separate chips (which allows something like 1GB of 64-bit DDR3 + 1GB of 128-bit GDDR5, and separate processes).
No, we're discussing a PPC stuck on the same die as an AMD GPU, which is not an AMD Fusion chip :LOL:
I don't think AMD would refuse to make money out of an old design like R7xx. MS did it with their last chip for the 360 S; I see nothing preventing N from doing the same.
 