Predict: The Next Generation Console Tech

Does the GPU transistor count include eDRAM?

Hmm, no, I didn't factor eDRAM in. It's kind of a choice between more GPU or eDRAM if you want everything on-die on a SoC. If they use fast enough memory, maybe high-end GDDR5 on a 256-bit bus, they might get away without eDRAM. With eDRAM, depending on the size, I'd guess 64MB is a good balance; that's ~550m transistors, probably chopped right off the GPU. Which would make your GPU a 1536-shader/24-CU part, probably, though with eDRAM you could likely get away with fewer ROPs.
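As a rough sanity check on that ~550m figure, here's a minimal sketch assuming conventional 1T1C eDRAM cells (one access transistor per bit); peripheral logic like sense amps and decoders would add a little on top:

```python
# Rough sanity check on the ~550m-transistor estimate for 64MB of eDRAM.
# Assumes 1T1C cells (one access transistor per bit); sense amps, decoders,
# and redundancy would add a few percent on top of this.
size_mb = 64
bits = size_mb * 1024 * 1024 * 8   # 64MB expressed in bits
transistors = bits                 # one transistor per bit for 1T1C eDRAM
print(f"~{transistors / 1e6:.0f}m transistors")  # ~537m, close to the ~550m quoted
```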
 
"I think it’s very important that a gamer sees an Xbox Next or PlayStation Next and can clearly see the tech is not possible on current consoles."

How about tech which isn't possible on current PCs?

What does that matter? That's not the audience they're targeting with a new console. They're targeting the people who currently have a 360/PS3, to get them to upgrade to something new. I see the problem, though, from the developer POV: look at GoW3 and then at that Samaritan demo Epic made. The average consumer probably doesn't see much difference between them, even though one is running on hardware 15-20 times as powerful. It certainly doesn't look 20 times better.
 
Hmm, no, I didn't factor eDRAM in. It's kind of a choice between more GPU or eDRAM if you want everything on-die on a SoC. If they use fast enough memory, maybe high-end GDDR5 on a 256-bit bus, they might get away without eDRAM. With eDRAM, depending on the size, I'd guess 64MB is a good balance; that's ~550m transistors, probably chopped right off the GPU. Which would make your GPU a 1536-shader/24-CU part, probably, though with eDRAM you could likely get away with fewer ROPs.

So we have 500 million transistors for the CPU, which translates into ~60 gflops.
POWER7 has ~900 million transistors without L3.
That means our derivative CPU has half the threads per core, or half the total number of cores, of a POWER7.

3 billion transistors for the GPU is about 3 teraflops.
550 million for eDRAM, which boosts the performance of both.
In total we're looking at around 3.0+ teraflops of computing power for the SoC design.

VS Discrete

Full POWER7 CPU
200mm^2
1.2 billion transistors
120 gflops

Discrete GPU
300mm^2
5 billion transistors
5 teraflops
In total we're looking at around 5.2 teraflops of computing power for the non-SoC design.

The discrete version has 1.5-2x the performance of the SoC version; the question is whether the discrete version costs at most 1.5-2x as much. Looking at the total numbers, I can see why MS would choose to go SoC. A 50-100% increase in flops isn't going to make much difference at all next generation. The extra budget can go towards more RAM.
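For what it's worth, the totals pencil out like this (a quick sketch using only the speculative figures from this post, not real specs):

```python
# Back-of-envelope totals for the two configurations sketched above.
# All figures are the speculative numbers from this post, not real specs.
soc_cpu_tf, soc_gpu_tf = 0.06, 3.0     # ~60 gflops CPU + ~3 teraflops GPU
disc_cpu_tf, disc_gpu_tf = 0.12, 5.0   # full POWER7 + discrete GPU

soc_total = soc_cpu_tf + soc_gpu_tf    # ~3.06 teraflops
disc_total = disc_cpu_tf + disc_gpu_tf # ~5.12 teraflops
print(f"discrete/SoC = {disc_total / soc_total:.2f}x")  # ~1.67x, within the 1.5-2x range
```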

If the Wii U can have 1-1.5 teraflops of computing power and enough memory, it would be quite competitive with the next Xbox.
 
What does that matter? That's not the audience they're targeting with a new console. They're targeting the people who currently have a 360/PS3, to get them to upgrade to something new. I see the problem, though, from the developer POV: look at GoW3 and then at that Samaritan demo Epic made. The average consumer probably doesn't see much difference between them, even though one is running on hardware 15-20 times as powerful. It certainly doesn't look 20 times better.

Possibly just a bad demo, imo. I was not too impressed with Samaritan, which is disappointing since the old UE3 demos really blew me away at the time. Anyway, do current-gen games look "10x better" than last gen despite 10x the power? That's really hard to quantify. Whatever the case, it's clearly a major improvement.
 
Possibly just a bad demo, imo. I was not too impressed with Samaritan, which is disappointing since the old UE3 demos really blew me away at the time. Anyway, do current-gen games look "10x better" than last gen despite 10x the power? That's really hard to quantify. Whatever the case, it's clearly a major improvement.

I dunno; personally, looking back at some of the crappy textures and blocky models from the PS2, I could see calling PS3 stuff a 10x improvement. Of course, if you want to get down to raw computing power, the PS3 is probably 50x the power of the PS2 to show that 10x improvement. With diminishing returns on power/visuals, I can see it taking 100x+ the processing power to show the same 10x improvement. Honestly, I don't see them hitting more than 15-20x the processing power with the constraints they have. Seems to me that the best area to improve would be productivity: easier development, letting devs get more work/graphics/code/etc. done in the same amount of time. Devs spend a couple of years making short games that look great; how much of that time is wasted tweaking and tuning code for system intricacies and bottlenecks?
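Purely to illustrate that diminishing-returns intuition (the exponents here are made up for illustration, not any real perceptual model):

```python
# Toy model: perceived improvement as a sublinear power of raw compute.
# The exponents are invented to match the thread's intuition, nothing more.
def perceived(power_ratio, k):
    return power_ratio ** k

print(f"{perceived(50, 0.6):.1f}x")   # ~10.5x: PS2 -> PS3 (50x power, ~10x visuals)
print(f"{perceived(100, 0.5):.1f}x")  # 10.0x: if returns diminish further, 100x power
                                      # buys only the same ~10x perceived improvement
```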

Carmack might have had it somewhat right with the idea behind Rage, although the first implementation is pretty fail. Making an engine where you can let the art and idea guys just go to town without worrying about the tech side of things: it seems like if you perfected it, you could really get unique stuff done quickly. I wonder if one of the next-gen consoles will take that lesson to heart on the hardware side of things. I'm probably rambling here...
 
Seems to me more like a hint that "we were successful in getting our desires adopted". Maybe it means more RAM like last time, 4GB ---> 8GB :p

Everything I've heard from Epic sounds to me like they're trying to get MS and maybe Sony to improve the hardware, like they did with the memory boost for the 360. There's no reason to say gamers should demand Samaritan-level games if the hardware specs were already capable of that.


Also, not really a prediction so to speak, but lherre just said the Wii U CPU is a tri-core processor with two threads per core.
 
Everything I've heard from Epic sounds to me like they're trying to get MS and maybe Sony to improve the hardware, like they did with the memory boost for the 360.

Although that story makes for a great internet anecdote, it's best not to afford it any substantial merit. We have one PR remark suggesting that was the case, and it's highly doubtful that Epic were the only devs wanting more RAM, or that MS would listen to them more than anyone else.
 
Also, not really a prediction so to speak, but lherre just said the Wii U CPU is a tri-core processor with two threads per core.

Interesting, got a link to that? Sounds like they might have an updated/modified version of Xenon? I guess that makes porting easy. What about the hinting at it being POWER7-based, though? It certainly wouldn't be POWER7 with only two threads, unless it's highly modified...
 
Although that story makes for a great internet anecdote, it's best not to afford it any substantial merit. We have one PR remark suggesting that was the case, and it's highly doubtful that Epic were the only devs wanting more RAM, or that MS would listen to them more than anyone else.

I'm pretty sure that story has been validated by Epic themselves many times. Of course, I don't feel like searching for a link right this sec :p Maybe I'll edit one in later.

Edit 1: Ahh, I see you're speaking in more general terms, in which case you may well be right. The story was that Epic actually created a demo of UE3 with 256MB vs 512MB to show MS the marked difference, which is pretty involved. But yes, it's reasonable to assume it wasn't specifically Epic.
 
I dunno; personally, looking back at some of the crappy textures and blocky models from the PS2, I could see calling PS3 stuff a 10x improvement. Of course, if you want to get down to raw computing power, the PS3 is probably 50x the power of the PS2 to show that 10x improvement. With diminishing returns on power/visuals, I can see it taking 100x+ the processing power to show the same 10x improvement. Honestly, I don't see them hitting more than 15-20x the processing power with the constraints they have. Seems to me that the best area to improve would be productivity: easier development, letting devs get more work/graphics/code/etc. done in the same amount of time. Devs spend a couple of years making short games that look great; how much of that time is wasted tweaking and tuning code for system intricacies and bottlenecks?

Carmack might have had it somewhat right with the idea behind Rage, although the first implementation is pretty fail. Making an engine where you can let the art and idea guys just go to town without worrying about the tech side of things: it seems like if you perfected it, you could really get unique stuff done quickly. I wonder if one of the next-gen consoles will take that lesson to heart on the hardware side of things. I'm probably rambling here...

It's hard to quantify an "x" improvement, but you can do some rough things. For example, quantity of RAM is a good basic one: PS360 = 512MB, PS2 = 32MB, Xbox = 64MB. So an 8x-16x improvement there. Number of transistors on the GPU (CPU too) would be another good one. It seems NV2A had ~60m, whereas RSX has around 300m. So 5x there. Pixel pipes (assuming they are equally capable, which is false): 4 on NV2A vs 24 on RSX, so 6x there... plus a doubling of clock speed = 12x.

It seems to be in general closer to an order of magnitude than not. I'd say 10-20x.
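Laid out as a quick sketch (all ratios straight from the figures above):

```python
# Per-metric generation-over-generation ratios from the post above.
metrics = {
    "RAM, PS2 32MB -> PS360 512MB":            512 / 32,      # 16x
    "RAM, Xbox 64MB -> PS360 512MB":           512 / 64,      # 8x
    "GPU transistors, NV2A ~60m -> RSX ~300m": 300 / 60,      # 5x
    "Pixel pipes x clock, NV2A -> RSX":        (24 / 4) * 2,  # 6x pipes, ~2x clock
}
for name, ratio in metrics.items():
    print(f"{name}: {ratio:.0f}x")
```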
 
It's hard to quantify an "x" improvement, but you can do some rough things. For example, quantity of RAM is a good basic one: PS360 = 512MB, PS2 = 32MB, Xbox = 64MB. So an 8x-16x improvement there. Number of transistors on the GPU (CPU too) would be another good one. It seems NV2A had ~60m, whereas RSX has around 300m. So 5x there. Pixel pipes (assuming they are equally capable, which is false): 4 on NV2A vs 24 on RSX, so 6x there... plus a doubling of clock speed = 12x.

It seems to be in general closer to an order of magnitude than not. I'd say 10-20x.

True. Even better, I suppose you could count polygons and texture resolution to judge "visual improvement", though that still excludes things like shaders and other effects. A quick eyeballing of the polycount thread vs last gen puts polycounts at ~8-10x or so, although there are special cases. For textures you can really just go off of RAM, although that doesn't count virtual texturing, which is admittedly new. By that count you end up in the same 10x-ish range. It's still a subjective metric, though, I suppose.
 
"I think it’s very important that a gamer sees an Xbox Next or PlayStation Next and can clearly see the tech is not possible on current consoles."

How about tech which isn't possible on current PCs?
At what cost? I don't care how high you set the console level; I could build a PC which could exceed it today, as long as cost is not an issue. The Windows team has a PC with 640 logical processors and a terabyte of RAM. Slap on dual AMD HD 6990s and you're good to go.
 
Although that story makes for a great internet anecdote, it's best not to afford it any substantial merit. We have one PR remark suggesting that was the case, and it's highly doubtful that Epic were the only devs wanting more RAM, or that MS would listen to them more than anyone else.

I wouldn't doubt that. I was just referring to it for comparison purposes.

Interesting, got a link to that? Sounds like they might have an updated/modified version of Xenon? I guess that makes porting easy. What about the hinting at it being POWER7-based, though? It certainly wouldn't be POWER7 with only two threads, unless it's highly modified...

http://www.neogaf.com/forum/showpost.php?p=32184704&postcount=9789

I've felt for a while that the POWER7 influence may have been due to the eDRAM (e.g. the 16MB indicated in the GPU article), which I'm expecting to be L2 cache, though there may be other things taken from it as well. I do agree that two threads makes it sound more Xenon-ish. One thing I noticed is that the press release only said Power; not PowerPC or POWER, just their general naming. I expect it to be built from the ground up like Xenon and Cell, rather than like Gekko and Broadway, but a "Watson-ized" Xenon wouldn't be incomprehensible IMO. Also, considering the Wii and GC, I would assume it will still be OoO.
 
At what cost? I don't care how high you set the console level; I could build a PC which could exceed it today, as long as cost is not an issue. The Windows team has a PC with 640 logical processors and a terabyte of RAM. Slap on dual AMD HD 6990s and you're good to go.

Just nostalgic for the time when the PS1 was, for a while, better than any consumer product out there for gaming graphics.
 
It's hard to quantify an "x" improvement, but you can do some rough things. For example, quantity of RAM is a good basic one: PS360 = 512MB, PS2 = 32MB, Xbox = 64MB. So an 8x-16x improvement there. Number of transistors on the GPU (CPU too) would be another good one. It seems NV2A had ~60m, whereas RSX has around 300m. So 5x there. Pixel pipes (assuming they are equally capable, which is false): 4 on NV2A vs 24 on RSX, so 6x there... plus a doubling of clock speed = 12x.

It seems to be in general closer to an order of magnitude than not. I'd say 10-20x.

And we should keep in mind that's a 4-5 year cycle. For the next generation we're looking at 8-9 years of advances.
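As a rough sketch of what the longer gap buys, assuming transistor budgets keep doubling roughly every two years (a Moore's-law-style assumption, not a guarantee):

```python
# If transistor budgets double about every two years, an 8-9 year gap
# compounds to roughly 16-23x -- in line with the 15-20x estimates upthread.
for years in (8, 9):
    print(f"{years} years: ~{2 ** (years / 2):.0f}x the transistor budget")
```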
 
Just nostalgic for the time when the PS1 was, for a while, better than any consumer product out there for gaming graphics.

I get like that about the Dreamcast. It was better than anything - PC, arcade (back when that meant something), and light years beyond any console - but that was back before DirectX was any good, when there were no fans on graphics cards, and before the PC graphics market could support investing outrageous amounts of money in chip development. And the Dreamcast's advanced features were hardly ever used in its short life.

Consoles might not capture the raw performance crown these days, but they benefit more from sharing R&D costs with the PC market and from being part of a huge and constantly growing common knowledge base of graphics programming. And much though PC gamers bitch and moan about consoles, PCs choked on Crysis while the 360 skipped daintily through it, dancing and giggling. And PC gamers are better off for having people like Crytek who know how to make hardware do that.

The current state of affairs works best for everyone, on balance, which I guess is why things are the way they are. For now, at any rate.
 