Predict: The Next Generation Console Tech

Status
Not open for further replies.
If consoles are coming in 2013 they won't be on 20nm.

And we don't even know what an 8970 is, let alone its TDP. iirc recent news indicates AMD's new designs will be 28nm though. The 7970 was 365mm^2 on 28nm and is over 200W. It is safe to assume the 8970 will be FASTER, on the same process, which makes it also safe to assume it won't be making dramatic drops in TDP under load.



So maybe the console makers are just going to bail out of chasing a path that leads to small nuclear furnaces in every home ;)

I do not think any consoles will launch in 2013. There were many delays with the last launch of consoles, and I think the delays will be even more significant this time. My guess is that no consoles will launch until 2014 at the earliest.

Why are you calling 350 watt systems small nuclear furnaces? Tons of appliances use more power than 350 watts. 350 watts is not a lot of power.
 
Late 2014 for commercial products at best. And even then, 20nm isn't a holy grail. Below is directly from TSMC's webpage on their 20nm process, and this is likely the best-case scenario.

Then I don't think any new consoles except for the Wii will be coming out until late 2014 or 2015. In my opinion, 28nm is simply not good enough for next gen consoles.
 
That would fall into the huge design benefits of more memory: larger worlds, shorter post-start load times, more system features instantly available, less laggy dashboards. I am all for those things (so many devices have such laggy UIs, and I was just playing FM4 and must admit the load times are a huge turn-off--I would much prefer a load-once, play-many approach). But visually, 2x GPU performance will allow much prettier pixels--although 2GB vs. 8GB could get ugly for larger game worlds. Take a BF title: if one game fits all its assets into a full 8GB, how do you get all of that into 2GB? Even with a great GPU that sounds like a recipe for some serious pain, especially with the much beloved BDR.

Load screens are the devil. And I think the marquee games are going to have large, detailed worlds. BF, especially the 64-player maps, pushes things hard already on a PC. It will only get worse over time. Next-gen engines are going 64-bit, and it isn't just for the larger virtual address space. There are a lot of games that are getting memory-capacity limited.

And yes, a 2x GPU gives you advantages, but if you cannot fit high-quality and varied assets in memory, then it really isn't going to buy you much. Just look at current BF3, with a large number of maps not being available on consoles because there just isn't any way to scale the assets to fit.
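To put rough numbers on the asset-budget squeeze described above, here is a sketch of how many block-compressed textures fit in 2GB vs. 8GB of RAM. The texture size, format, and mip overhead are illustrative assumptions, not figures from any shipped game.

```python
# Rough texture-budget arithmetic (illustrative numbers, not from any real
# title): how many 2048x2048 textures fit in a given RAM budget, assuming
# block compression and a full mip chain (~1/3 extra on top of the base level).

def texture_bytes(width, height, bits_per_texel, with_mips=True):
    base = width * height * bits_per_texel // 8
    # A full mip pyramid adds roughly 1/3 of the base level's size.
    return base * 4 // 3 if with_mips else base

GiB = 1024 ** 3
dxt5 = texture_bytes(2048, 2048, 8)   # BC3/DXT5: 8 bits per texel
print(dxt5 / 2**20)                    # ~5.33 MiB per texture

for budget_gib in (2, 8):
    count = budget_gib * GiB // dxt5
    print(f"{budget_gib} GiB holds ~{count} such textures")
```

A 2GB budget holds roughly a quarter of the unique texture data an 8GB budget does, before you even account for render targets, geometry, audio, and game state, which is the scaling problem described above.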
 
But BF3 doesn't use virtual texturing a la Rage, does it? Also, instead of packing in more RAM, how about better texture compression? S3TC is what, 15 years old now? Nearly that, anyway. Surely we can do better today.

2D image compression is pretty well understood now and actually was pretty well understood when S3TC was created. You could get higher compression but then you are going to require a lot more hardware to decompress. And the difference isn't going to be massive.
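For context on the S3TC point: GPU block compression trades quality for a fixed bit rate, and the compression ratios fall straight out of the block sizes. A quick sketch of the DXT1/DXT5 arithmetic (these ratios are exact properties of the formats):

```python
# Fixed-rate block compression math: S3TC/DXT encodes each 4x4 texel block
# into a fixed number of bits, which is what makes random access on the GPU
# cheap. Higher ratios would need smarter (more hardware-hungry) decoders.

def bpp(block_bits, block_texels=16):
    """Bits per texel for a 4x4 block format."""
    return block_bits / block_texels

dxt1 = bpp(64)    # DXT1: 64-bit blocks -> 4 bits per texel
dxt5 = bpp(128)   # DXT5: 128-bit blocks -> 8 bits per texel
print(32 / dxt1)  # 8:1 vs uncompressed RGBA8
print(32 / dxt5)  # 4:1 vs uncompressed RGBA8
```

The fixed 4-8 bits per texel is the key constraint: a variable-rate codec could compress further, but the GPU could no longer compute a texel's address directly, which is the extra decompression hardware cost mentioned above.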
 
Cteam, can you please start putting links to the quotes you put on B3D? Because that new "rumor" you just put on the last page looks like something made up.

Shocking Alberto
http://www.neogaf.com/forum/showpost.php?p=36353477&postcount=9316
They asked publishers and developers what they need in a next-gen system

EA, Activision, DICE, and Epic all insisted on raising the ceiling by what we would traditionally consider a generational leap.

Microsoft listened because they were told this would increase game sales and hardware adoption.
 
Ah yes, that ancient rumor that had already been discussed here when it was still timely (3 months ago). Nothing new here, please move along...
 
I do not think any consoles will launch in 2013. There were many delays with the last launch of consoles, and I think the delays will be even more significant this time. My guess is that no consoles will launch until 2014 at the earliest.

Why are you calling 350 watt systems small nuclear furnaces? Tons of appliances use more power than 350 watts. 350 watts is not a lot of power.

350 watts in a small console box is a huge amount of power. Yes, lots of big things use more power, but they are big. The only small thing I can think of that uses more power than that is a toaster, and its purpose is to cook things. Some mega-high end AV equipment uses that much or more power, but their boxes usually have 3-5 times the volume of a console box, and massive profit margins so they can load up on copper heatsinks.

It's not a power problem; it's a power/heat problem in a limited space. It's also a cost problem: sure, an awesome all-copper heatpipe system could cool a lot, but if the cooler costs $100 you've lost a third of your budget already.
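As a rough sanity check on the power/heat point, here is the standard air-cooling arithmetic (Q = rho * flow * cp * dT). The wattages and the assumed 15 K exhaust temperature rise are illustrative assumptions, not console specs.

```python
# Back-of-envelope airflow needed to carry heat out of a console-sized box.
# Assumption: all heat leaves via exhaust air allowed to run 15 K above
# ambient. Constants are standard properties of room-temperature air.

RHO_AIR = 1.2        # kg/m^3
CP_AIR = 1005.0      # J/(kg*K)
M3S_TO_CFM = 2118.88 # cubic meters/second -> cubic feet/minute

def airflow_cfm(watts, delta_t_kelvin):
    m3_per_s = watts / (RHO_AIR * CP_AIR * delta_t_kelvin)
    return m3_per_s * M3S_TO_CFM

for w in (200, 350):
    print(f"{w} W -> {airflow_cfm(w, 15):.0f} CFM")
```

Roughly doubling the airflow through the same small enclosure means bigger fans, more noise, or both, which is why 350W in a console-sized box is a very different problem from 350W in a full tower.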
 
It does for the terrain. There are different approaches to virtual texturing though with different goals (e.g. see sebbbi's posts on the Trials Evolution thread).

BF3 also stores all the terrain [texture] data for the environment around you (360 degree FOV) rather than just what you can see (FOV + a little extra). Rage is far more efficient in its use of memory than BF3, but BF3 lets high-dpi mouse users twitch round 180 degrees and not see pop-in. On a console with limited turning speed that's not such an issue.

Fast access to storage would benefit consoles a lot more than great gobs of ram.
 
I do not think any consoles will launch in 2013. There were many delays with the last launch of consoles, and I think the delays will be even more significant this time. My guess is that no consoles will launch until 2014 at the earliest.

Why are you calling 350 watt systems small nuclear furnaces? Tons of appliances use more power than 350 watts. 350 watts is not a lot of power.
Not for small boxes that sit under your TV and have a £250 BOM.
 
but if the cooler costs $100 you've lost a third of your budget already.
$100 would buy you an absolutely MASSIVE cooler in a mass-produced gadget. It'd literally be the size of an entire PS3, or greater still. Don't confuse retail cost of coolers (of which there pretty much aren't any $100 units) with actual BUILD COST. :)

The vapor chamber cooler for the Radeon HD 5970 (the dual-GPU Cypress card) was said to be able to cool 400W. Of course it was coupled to a very noisy blower, but it wasn't very large, yet still quite powerful. If the fins were sized up vertically, the fan could be made larger and quieter. It wouldn't be very expensive either: a vapor chamber is two plates of copper (or aluminium) soldered together, fins on one side, a wick of some sort inside (which could be simple metal mesh), and a bit of water and/or ammonia as coolant. Those things could be banged out mass-production fashion quite cheaply, and be very effective.

That said though, I don't think there's a chance in hell we'll see 350W consoles. Nor would I relish such a thing either, the last generation that pulled upwards of 200W was quite bad enough at heating up my flat. :p
 
Then I don't think any new consoles except for the Wii will be coming out until late 2014 or 2015. In my opinion, 28nm is simply not good enough for next gen consoles.

Why? It's good enough for 10x the current generation consoles within a reasonable console power budget. Is 10x not enough for you? It's been enough for all the previous console generations.

There's no point in comparing to PCs: when 20nm comes, PC GPUs will be using it and still pushing 200-250W.
 
OK, I will play the game. Re: performance. Is your situation texture bound? Fill-rate bound? Shader bound? Are they seeing a 10x improvement on their current code just by dropping it in (and is that in select parts of the code or across the board in every aspect)? Is it a best-case scenario, where the biggest architectural change relieves a bottleneck in Xenos/RSX, or a worst-case scenario, where RSX/Xenos were already good and this is the least-improved aspect of the new design? Or, by golly, is 10x just a nice round number to express the approximate memory increase, or how many polygons they can throw on screen at any one time?

You don't know. Hence, dismissing other possibilities, even unlikely ones, and constantly *insisting* it must be one thing or another is just mind-boggling. You cannot even categorize what performance is, as different pressures change.

I don't think it takes much imagination to see how a design aimed at a specific situation changes the picture. Take fill rate: the Xbox1 had 6.4GB/s of total system bandwidth, and fill rate (especially with transparencies) was a big issue. The 360 went a long way to address this with the ROPs on eDRAM, so software that was previously fill-rate bound no longer is (the peak speedup in the worst-case scenario was 40x). So there could legitimately be software situations where you were fill-rate bound and saw gargantuan performance increases, but calling the hardware 40x faster would be inaccurate--unless the person passing along the rumor was looking specifically at such a case.

That is why 10x performance doesn't mean squat without a proper context, which the rumor lacks. If the rumor is legit we have no clue how many people the data has been filtered through, how they are deriving their metrics, and what they mean.

If you knew for a fact that they were talking very specifically about the end result being today's code running 10x faster, you would obviously know more than the person passing along the rumor!

I just don't see why you continue to insist that it must be "10x" (whatever that means) after architectural efficiencies. Pray tell: what GPU is 10x faster than, say, an X1800?

(Hint: are you going to run to some architectural metrics and guess at architectural increases, or are you going to look at gaming benchmarks? Which ones? With what features enabled?)

And that really gets to the end point: take a product we know, e.g. the ATI X1800, and show me a 10x faster GPU.

This should be child's play: after all, you know the starting HW and have nearly 8 years of GPU releases to find a product that fits the mold.

I will be quite interested in what you choose that is 10x the performance ;)

I think you're assigning too much meaning to what I was saying. I'm not trying to analyse every facet of the next-generation GPUs from one vague 10x comment. I was simply pointing out that 10x is unlikely to mean a 2.4TFLOP GPU. That's all.

However, since you raise an interesting question of exactly what 10x performance would mean, I'll bite.

Since we're talking consoles here, which fully utilise the capabilities of a GPU where possible, I'd define 10x performance as "up to 10x performance", with console games hitting the full potential of that "up to 10x" most of the time. In other words, if a GPU has 10x the shader throughput and only 5x the texture throughput, then next-gen console games will be built to take advantage of that, and thus when running on a last-generation console would run 10x slower (because of shader limitations) even though the last-generation console's texture resources may be underutilised.

Thus you can't say that current-generation games would be 10x faster on the next-generation consoles, because they've been designed for a different workload. But in that case the next-gen GPUs would be heavily underutilised, and since that's not how console games work, it's not a valid scenario to consider.

That's why even though the GTX 680 may be 25x more powerful than RSX, you won't actually see a 25x framerate increase in modern games when comparing, say, a 7900GT and a GTX 680 on the PC. That's because modern PC games are largely designed to utilise RSX's strengths rather than the GTX 680's, and all the extra resolution, shading and AA load that can be applied on PCs to use up the extra power is a very inefficient use of those resources.

Which brings us to your X1800 question. I assume you chose that specifically to see how I'd address the X1900, with its 3x higher shader throughput and everything else being equal? So I'll answer that first, and my answer would be that in a console environment the X1900 may indeed be considered 3x as powerful as the X1800 some of the time. Obviously there will be times in a game - even one specifically designed to utilise the full shader potential of an X1900 - when it is limited not by shader throughput but by other aspects of the chip, and in those cases the two would actually perform the same. So it's important to increase all parts of the GPU together, to ensure you can actually hit the peak throughput of all areas of the GPU as often as possible.

Modern GPUs may have 20x the shader performance and only 10x the texture performance of older generations, but if the relative ALU:TEX ratio in modern games has doubled since that older generation, then a 10x increase in texture performance is enough to realise your 20x greater shader throughput. I'd suggest that's probably not the case with the X1800/X1900: one, or more likely both, of those GPUs is probably quite unbalanced. I.e. for the workloads of the day, the X1800 is likely shader starved while the X1900 is too shader heavy for the rest of the chip's resources.

So, 10x the X1800's performance if both GPUs were in a console and thus fully utilised? I'd say something like a 4890 would be in the right ballpark. Accounting for efficiency increases, it probably has near 10x the shader throughput and near 5x the texture throughput, maybe around 50-70% more ROP performance and ~3x the setup capability. If utilised in such a way that its shader array was fully occupied as often as possible, a game designed specifically for it may run close to 10x slower on the X1800 a lot of the time. And indeed the same game may be around twice as fast on the X1900, where it would be more limited by texturing performance than by shaders.
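The bottleneck argument running through this exchange can be sketched as a simple Amdahl-style model: the effective speedup is the harmonic mean of the per-resource speedups, weighted by how much of the frame is bound on each resource. The shares and ratios below are made-up illustrative numbers, not real X1800/4890 measurements.

```python
# Amdahl-style sketch of the point above: an "Nx" GPU only delivers Nx when
# every resource the workload leans on scaled by N. If any resource scaled
# less, frames that lean on it drag the overall speedup down.

def effective_speedup(shares, ratios):
    """shares: fraction of frame time bound on each resource (sums to 1).
    ratios: how much faster each resource is on the new GPU."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    new_time = sum(shares[k] / ratios[k] for k in shares)
    return 1.0 / new_time

# A shader-heavy frame comes close to the 10x shader uplift:
print(effective_speedup(
    {"shader": 0.8, "texture": 0.15, "rop": 0.05},
    {"shader": 10.0, "texture": 5.0, "rop": 2.0},
))  # ~7.4x

# A texture/fill-heavy frame sees far less of it:
print(effective_speedup(
    {"shader": 0.3, "texture": 0.5, "rop": 0.2},
    {"shader": 10.0, "texture": 5.0, "rop": 2.0},
))  # ~4.3x
```

This is why "10x" is meaningless without knowing which resource the measured workload was bound on: the same pair of GPUs produces very different multipliers depending on the frame's mix.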
 
What you are saying doesn't make sense at all. FLOPS have been used as the metric for performance leaps for two generations now. Efficiency and new designs are not counted because it's impossible to do so. The Xbox --> 360 difference in performance was based on the FLOPS difference, not on design and efficiency differences. Do you think that, compared to the transition from this gen to the next one, the last one didn't have any architectural efficiency benefits? Because if you think that, then you are obviously wrong.
 
The Xbox --> 360 difference in performance was based on the FLOPS difference, not on design and efficiency differences.

Since when? FLOPS wasn't that important of a performance metric back in the GF3/4 days and the PS2 didn't even have programmable shaders so the performance difference between that and PS3 obviously wasn't based on FLOPs.

I wasn't aware there was even a general consensus on how much more powerful the 360 is than the xbox. When it launched I remember 8x being bandied around, and that's a far cry from the 15x paper FLOPS difference and more in line with the 4.3x pixel/texel fillrate and 4x main memory bandwidth increase, plus eDRAM and big strides in efficiency.
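For reference, the 4.3x fillrate figure follows directly from the published clocks and pixel-per-clock counts of the two GPUs (NV2A at 233MHz with 4 pixel pipelines; Xenos at 500MHz with 8 ROPs):

```python
# Peak pixel fillrate is just clock * pixels-per-clock. The clocks and ROP
# counts below are the commonly published NV2A (Xbox) and Xenos (360) specs.

def peak_fillrate_mpix(clock_mhz, pixels_per_clock):
    return clock_mhz * pixels_per_clock

nv2a = peak_fillrate_mpix(233, 4)   # 932 Mpix/s
xenos = peak_fillrate_mpix(500, 8)  # 4000 Mpix/s
print(xenos / nv2a)                  # ~4.29x, the "4.3x" quoted above
```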
 
Since when? FLOPS wasn't that important of a performance metric back in the GF3/4 days and the PS2 didn't even have programmable shaders so the performance difference between that and PS3 obviously wasn't based on FLOPs.

I wasn't aware there was even a general consensus on how much more powerful the 360 is than the xbox. When it launched I remember 8x being bandied around, and that's a far cry from the 15x paper FLOPS difference and more in line with the 4.3x pixel/texel fillrate and 4x main memory bandwidth increase, plus eDRAM and big strides in efficiency.
...yes, and it was clocked at 233MHz with 4 pixel pipelines and 2 vertex shaders, compared with 48 unified ALUs on Xenos; it's not even close. The difference is bigger than 10x, and considerably bigger than something like Xenos to HD4890, which is what you implied in your post.

The leap was huge in every way possible, and the time between the two generations was ~4 years. In comparison, this time it is 8 years. So, to see expectations like a 6670 or 4890 is quite interesting, especially since PC hardware has leaped so much that it would be a real shame if nobody took advantage of it. And what bugs me is that enthusiast gamers on PC will be the biggest losers in this kind of scenario, because nobody is crazy enough to push graphics like the tech demo SE showed a month ago on PCs. If consoles are underwhelming, then expect up-ports to PC, not the other way around.
 
The difference between the xbox and xbox 360 in shader terms was indeed huge. As I said, 15x on paper plus efficiency gains, making the real-world shader performance increase probably something like 25-30x. So it's not comparable to the difference between Xenos and the 4890, nor did I ever intend to imply it was.

But you have to put that in the context of shaders being brand new when the xbox launched, and thus not really a key metric of its performance. They only really came into vogue and became the key metric of a GPU's performance after the xbox's launch, which explains the massive leap in shader power between the xbox and 360. I don't think it's realistic to expect a similar ballooning of shader performance over the same time period, given that shaders were already well established and central to Xenos, compared to being little more than tacked on in NV2A.

When you look at the other areas of the console though, the leap isn't anywhere near as great: around 4x plus efficiency gains. Obviously in modern shader-bound 360 games, the xbox really would be around the 25x-slower mark, if it could even run those games in the first place.

To put some context around what kind of increase you can expect 4 years after Xenos: that's about when the 5870 launched, with about 11x the FLOPS, 6.8x the ROP throughput and 8.5x the texturing throughput, not accounting for efficiency gains. So clearly shaders are increasing at a relatively slower pace compared to other aspects of the GPU post-Xenos launch vs. pre-Xenos launch.
 
To put some context around what kind of increase you can expect 4 years after Xenos: that's about when the 5870 launched, with about 11x the FLOPS, 6.8x the ROP throughput and 8.5x the texturing throughput, not accounting for efficiency gains. So clearly shaders are increasing at a relatively slower pace compared to other aspects of the GPU post-Xenos launch vs. pre-Xenos launch.

What?

The normal way to look at it would be: raw shading power (measured in peak flops) continues to increase at a faster rate compared to texturing (11x vs. 8.5x) and ROP throughput (fill rate? 11x vs. 6.8x).

Trying to pin the trend on a more fixed-function design (pre-DX9, ATI 9700/NV5800) and the 25x leap you mention, and how it is "decelerating", is really a poor sample. E.g. shader power *tripled* between the X1800 and X1900 series in a one-year window while texture and ROP performance inched forward. Where and how you sample the data makes a big difference if you are looking at regressions.

That said, the only sane way to look at it is that for the better part of a decade now, programmable shader (math) performance has increased faster than texturing and raster output processors--in both raw throughput and flexibility, and in the share of the die it consumes. Even your own numbers show raw shader performance continuing to climb faster than the other parts: 11x vs. 8.5x for texturing, and 11x vs. 6.8x for ROP throughput.

This has been true for a long time and discussed in the GPU architecture threads, as AMD/NV have moved from 2:1 to 3:1 and beyond ALU:TEX ratios, with the majority of GPUs really lagging in ROPs as they are more closely tied to bandwidth and resolution needs (both of which are growing much, much slower).
 
I just want to remind everyone that the rumor, and what lherre said, is that PS4 is 10x PS3, not that the GPU was gonna be a 7970. Regarding the Xbox 3, he said it was gonna be very, very powerful, but didn't give any number.
 