Predict: The Next Generation Console Tech

I'm sure you are aware of how mGPU setups work currently. They need twice the VRAM of a single GPU setup.
 
That sort of comparison is flawed by design. Chip companies like AMD and Intel have different motivations and timetables for their decisions than a console manufacturer trying to pick the optimum timeframe for launch. Often the first product that comes out of a new node is a pipe cleaner that helps ramp up the production line for other products; it saves them money and gives those early wafers a use. I don't see any point in using a cost-cutting measure or a pipe-cleaner product as some sort of baseline here. I'm sure they could make a Pentium 1 on 32nm and then the jump during that node would be amazing...

AMD have been desperate to catch Intel in any way they can since 2006, yet still they approached new nodes conservatively, right up until Bulldozer. AMD stayed on 65nm K8 for well over a year before Phenom. This is not like making a Pentium 1 on 32nm as a pipe cleaner, and internet banter aside I think you'd agree. ;)

You can't use something like "Athlon 64 X2 90nm to Athlon 64 X2 65nm offered basically no improvement in raw performance, but a small improvement in perf/watt."

as an argument here, because improvement in raw performance was never intended in that shrink. It was purely a cost-cutting measure. No-one can expect a huge gain in performance in situations like that. Perhaps a bit better overclocking. Architectures don't usually scale in a way that sees huge improvements there. The 65nm chip was about half the size and had less cache... Raw performance was not the target there, and most of your other examples are similar or flawed in some other way (a 2x bigger die giving 2x performance only because the initial die was tiny). The optimal strategy for chip making is not the same as launching a game console.

You're kind of implying that there was no deliberate attempt to arrange things the way they turned out. In essence, there is no tick-tock style strategy; it's just that the new, larger chips weren't ready, so they used the node to save money while just ... waiting? ... for a new chip. At least that's what it sounds like.

I don't think engineering works this way, at least not if you believe the stuff about microprocessors taking years to develop and bring to market, and needing to be designed for a specific process node with a specific foundry. Given the regularity with which this has happened, I believe it's planned.

My guess would be that the CPU in the Wii U won't be a very aggressive design and therefore doesn't require a smaller process than that. A new process also takes time to become cheaper than one that still has high volume. Nintendo's design might fit nicely on an older process and get a cost benefit that way; it also puts a tighter ceiling on what you can put in there.

Well ... Nintendo are trying to cram a current gen beating, next gen competitive machine into a tiny, cooling-starved box, so they must have a business-driven, engineering-based reason for choosing 45nm when smaller, faster, and (supposedly) ultra power efficient processes will be available (supposedly). They intend to make the console for the next 6 - 8 years (at least), so going by your not unreasonable reasoning* that would override any short term interest in using an outdated and uncompetitive node.

*Later in your post you say "Later in the node you get better yields yes. How much that matters if you compare console life cycles of 2012-2020 vs 2013-2020 is debatable. Especially if the earlier launch helps you in other ways and the chips wont be that big in the first place". If there's anyone that would apply to, it would be Nintendo in their tiny little 4 ~ 5mm cooled shell, surely?

I happen to think 45nm is probably their only realistic option for both mass availability and manageable costs.

I don't quite understand what you are saying here. Intel launched hex core on 32nm (i7 980X) in March 2010 before any quads on that process, if I'm not mistaken, but there were dual core Clarkdales slightly before that. Sandy Bridge was the first 32nm quad in early 2011. Intel is making anything between 2-8 cores on 32nm currently, although two cores are disabled on the native 8 core chip (3960X). I'd definitely say that Intel's tick-tock strategy has always brought major architectural changes on a new node: Core 65nm, Nehalem 45nm, Sandy Bridge 32nm.

I think it's pretty clear that I'm saying that Intel, like AMD, have brought their larger, much more powerful processors to process nodes that they're already established on!

Intel are a phenomenon, unlike any other chip design or fabrication house in the world, and yet with 1 -> 2 cores they made the transition on the same node, and with 2 -> 4 cores they also made the transition on the same node. I mistakenly said, from memory, that I thought they went 4 -> 6 cores on the same node. They didn't, as you point out. They went 2 -> 6 -> 4 cores, still establishing themselves with a smaller product on the node first. That one minor, partial deviation, from a chip manufacturer with unparalleled design and manufacturing resources, is pretty much the only wobble on the line.

Even then the step up to Sandy Bridge (2 -> 6 -> 4) came late in the node, and produced the most advanced chip that Intel have made. It even beats the 6 core Gulftown in some scenarios, so once again there continue to be big gains later on the same node. It's a theme that one simply can't get away from, from either AMD or Intel, on any node.

It's pretty clear that a node change isn't the only thing responsible for jumps in power, with some enormous jumps in chip power happening (some of the very, very biggest) during the lifetime of a node. I really don't think anyone can argue against this with a straight face.

In any case, Intel's doings aren't really relevant either, because they are playing a different game too.

Well, at least they're making high performance CPUs, like AMD. I think they're more relevant to console CPUs than PC graphics chips are ...

The 360 launched November 22 in NA and early December in Europe. By February it was quite easy to find a unit. The PS2, for example, had far bigger shortages. 360 supply was able to meet demand quite quickly. I'm still happy with my 1-2 months, but if it helps, add one or two more, as it doesn't change anything.

So they had extremely limited availability for the entirety of their first Christmas - Christmas being by a huge margin (enormously, massively huge) the most important period of the year. This is really important, as they missed out on millions (literally) of sales and lost much of their first mover advantage.

In the UK, the enforced bundles were starting to become less outrageous by February, but in many places they persisted until the summer. The 360 Core had horrible value bundles for quite a long time (a scrubby pack-in game and the 5 byte memory card).

I think MS, Sony and Nintendo will be planning on shipping millions in their first year with their next systems.

I brought that up because that example is the best and most proper argument for your angle, and actually what you should have brought up in the first place instead of something like Athlon 64 X2 90nm vs 65nm.

I'm sorry, but I really don't think that is in any way the best or most proper argument for talking about console chips, especially CPUs. And you'll note I wasn't just talking about two particular AMD CPUs; I was talking about all AMD processors in the last 5 years and all Intel processors in the last 5 years.

I really, honestly don't agree with you.

My point, by the same token, was that that truly is the best case for your angle. Those percentages happened on an absolute monster 500mm2 chip that would never have any business inside a console. A smaller, mid-range-ish chip is not going to run into problems of that magnitude, and therefore in the console realm the problem is not going to be as big as in that example.

You're wrong!

Those percentages (10-20% faster with 10-20% less power consumption) equal something like a 35% performance per watt advantage. That's huge. That's really huge. Just look at Sebbi's comments about fighting for a single percentage point gain in performance, or Bkilian's comments about the lengths hardware manufacturers go to in order to control costs (think of the extra power regulation and cooling requirements needed just to maintain parity with a competitor's machine). Look at the performance difference between Xenos and RSX, and all the port comparisons and forum arguments and pixel counts.
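For what it's worth, here is the back-of-the-envelope arithmetic behind that ~35% figure, using nothing but the 10-20% ranges above (a rough sketch, not anything from a datasheet):

Code:
# Rough perf/watt check for "10-20% faster at 10-20% less power".
# Midpoints (15% faster, 15% less power) as the illustrative case:
perf_gain = 1.15        # relative performance
power_factor = 0.85     # relative power draw

print(f"midpoint perf/watt: {perf_gain / power_factor:.2f}x")  # ~1.35x, i.e. ~35%
print(f"best case:          {1.20 / 0.80:.2f}x")               # 1.50x
print(f"worst case:         {1.10 / 0.90:.2f}x")               # ~1.22x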

You can bet that MS, Sony and Nintendo will care about things like that. Anyone on the hardware teams that expressed such sentiments would probably be led out in front of a firing squad. :p

Those are not gains. Different products for different purposes/goals

You implied that almost nothing on 40nm changed from the start, so I was pointing out that AMD's 40nm chips followed the same, boring old pattern as their CPUs (and some previous graphics nodes). "Different products" just happens to fit with this start small -> grow big way of doing things, and I don't think it's a coincidence. As an added bonus we later got the high performance / power efficient mobile parts! (That was just a bonus for my argument, so yeah, feeling a little bit smug here. :eek:)

It shouldn't be a surprise though, because the whole 480 -> 580 story fits too, despite being "same products for same goals". It all fits. It always fits ...

Later in the node you get better yields yes. How much that matters if you compare console life cycles of 2012-2020 vs 2013-2020 is debatable.

Well, you were guessing it could matter with the Wuu. I think it could potentially matter for all of them, just like it's been shown to matter for Intel, AMD and nVidia. But it could well be a balancing act.

I think they'll all want to start strong, with millions of units in the first few months and definitely by the end of the first Christmas. That means whatever process they start on mustn't be a Llano/32nm damp squib. Or, by the sounds of it, like GlobalFoundries' Fusion-nullifying 28nm.
 
A 28nm GPU with a modest last gen GPU footprint (230mm^2) won't impress

A 28nm GPU with a modest last gen GPU footprint (230mm^2) will have AMD HD 6870 (Barts) like performance?

Humor me for a moment with this punchy punch-line: next gen (Xbox 3, PS4) graphics can be bought, right now, for about $155 (CPU not included). Yes, 2010 graphics chips may match 2013 consoles. And by the time the consoles launch in 2013, contemporary PC GPUs will be twice as fast and cost the same as (or less than!) a new console. Punchy, right? Sadly, the facts are trending in this direction.

At face value it would seem a 28nm GPU, the guesstimated target for next gen chips, could exceed 3000GFLOPs (maybe even close in on 4000) and 70GT/s of texturing simply by moving a 40nm Barts AMD HD6850 (a 6870 with disabled units) down to 28nm while keeping the 255mm^2 footprint. Such a design would not be a top of the line 2013 GPU, but it would be quite competitive. 28nm should double density (right?), offer more frequency (right??), and bring a big reduction in power draw (right???) … but reality isn't as sweet. This is one reason I am not super excited about a 28nm console. I think console makers are looking at slightly reducing their silicon footprints from last gen, and with additional chip manufacturing issues (and an eye toward future reductions) and the dirty details of what a node reduction actually buys, my math below spells out a roughly 2000GFLOPs and 50GT/s GPU on 28nm (roughly AMD HD6870 performance, a far cry from the 3000+ GFLOPs and 70GT/s the simple projection above would indicate).
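To show where the optimistic numbers come from, here is that naive projection spelled out as a sketch (it assumes a perfect 2x density gain at 28nm and that every unit scales with it, which is exactly what the rest of this post argues against):

Code:
# Naive 28nm projection: a 40nm Barts 6850 (960 shaders, 48 TMUs, 775MHz,
# 255mm^2) with an idealised 2x density gain at the same die size.
shaders, tmus, mhz = 960, 48, 775
density_gain = 2.0                                   # perfect 40nm -> 28nm scaling (optimistic)

gflops = shaders * density_gain * 2 * mhz / 1000.0   # 2 FLOPs/shader/clock (VLIW5 MADD)
gtexels = tmus * density_gain * mhz / 1000.0

print(f"{gflops:.0f} GFLOPs, {gtexels:.1f} GT/s")    # ~2976 GFLOPs, ~74 GT/s
# A modest clock bump on top of this pushes past 3000 GFLOPs, hence the
# "maybe even close in on 4000" ceiling above.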

Persuade me: give me intelligent reasons why I should keep my hopes up for a 3000+ GFLOPs monster console GPU at 28nm.

Until then, let me convince you, and depress you, that a 28nm GPU in 2013 in a console is going to be no 2x 6850 but instead a single 6870-like chip.

Let's start with the budget. Last gen GPUs were in the 230-260mm^2 range at launch. We should consider this the upper bound for silicon next gen, as processes haven't reduced chip costs significantly and the advent of motion controls and the importance of storage media will be pressing on silicon budgets.

Let's be conservative and see what a similar budget on 28nm would look like. Some basic information:

28nm is half the size for the finest geometries (e.g. SRAM) compared to 40nm. Logic is not as dense.

40nm is mature, so 28nm won't be as robust, will be more expensive, and will have lower yields. 80% scaling is optimistic IMO.

Architectural and efficiency differences aside (Xenos > RSX), last gen consoles look something like this (from memory, and depending on how you count, so don't shoot me; I know the numbers below are off as I did this from memory on a lunch break, but I wanted some context):

Code:
Model   MHz   mm^2   Mtrans   GFLOPs   TMU   ROP
Xenos   500    230      232      240    16     8
RSX     500    255     300?     230?    24     8
Using AMD's current (fall 2010) models as a baseline of what a modern GPU architecture and budget look like:

Barts: 255mm^2, 1.7B transistors, VLIW5
Code:
Model   MHz   Shad   TMU   ROP   GFLOP   TDP
6790    840    800    40    16    1344   150
6850    775    960    48    32    1488   127
6870    900   1120    56    32    2016   151
Cayman: 389mm^2, 2.64B transistors, VLIW4
Code:
Mod     MHz   Shad   TMU   ROP   GFLOP   TDP
6950    800   1408    88    32    2253   200
6970    880   1536    96    32    2703   250
  • 6870 to 6850: 14% drop in Shaders, TMUs, and Frequency; 27% drop in GFLOPs
  • 6970 to 6950: 9% drop in Shaders, TMUs, and Frequency; 17% drop in GFLOPs
Let's acknowledge the following:
  • TDP scaling on PC GPUs doesn't fit a console's metrics;
  • the silicon footprint of PC GPUs is far above the cost tolerances of consoles;
  • 28nm won't provide a 100% real-world increase in transistor density;
  • 28nm is going to be more expensive (yields, demand, general cost of progress, competition) in 2013 than 90nm was in 2005;
  • the success of the Wii in the $250 price bracket has made the console manufacturers more price sensitive (a complicated issue);
  • the cost of large standard storage and Kinect/Move-like devices needs to be compensated for in other aspects of the design;
  • new technologies (stacked memory, Silicon Interposers, etc.) are not free;
  • the RRoD/YLOD experience means a mindfulness to decrease TDP and improve cooling through better coolers on original units* and more case volume.
Put it all together and a 300mm^2 GPU doesn't seem to be a target console makers will be reaching for.

Assuming an AMD chip, a major wildcard will be the transition to GCN / DX11.x+ architectures, which will carry additional feature costs and overhead not currently represented in the Barts/Cayman models. There will also be a transition from VLIW to SIMD (+ scalar) with Southern Islands (GCN; http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/3).

Four major numbers to keep in mind: (1) a 10% reduction in area from 255mm^2 (Barts) to a more conservative 230mm^2; (2) the 15% redundancy seen in the Barts models (6870 vs 6850; note that Xenos already had redundancy like this, so it needs to be factored into the smaller die size, and 15% is aggressive given we see only 9% in Cayman); (3) a 22% drop in frequency (900MHz to 700MHz), again aggressive, but TDP is a major issue; note the TDP drop between a 6790 at 840MHz and a 6850 at 775MHz even though the 6850 is the faster part; and (4) 80% scaling from 40nm to 28nm.

Applying 1 & 2 to a 40nm GPU results in roughly a 25% reduction in functional units from a 6870, and adding 3 we are looking at about 1200GFLOPs: a drop of nearly 40% from a 6870 (2016GFLOPs) and 19% from a 6850 (1488GFLOPs) for this hypothetical GPU on the 40nm process. Scaling the 6870 first upward to 28nm (an 80% increase in density; 1.8 * 2016 = 3628GFLOPs) and then reducing for redundancy and the smaller die (about 25%; 3628 * 0.75 = 2721) arrives at about 2700GFLOPs. That is a net functional unit scaling of about 35% above a 6870. Reducing the frequency to a more reliable and power efficient 700MHz (a 22% drop) arrives at about 2100GFLOPs, which sits between today's Barts 6870 and Cayman 6950.
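The same arithmetic as a quick sketch, so the steps are easy to follow or re-run with different assumptions (the percentages are just the four numbers listed above, nothing more):

Code:
# Reproducing the scaling math with the stated assumptions.
gflops_6870 = 2016.0                          # 40nm Barts 6870 baseline

# (1) 10% smaller die + (2) 15% redundancy -> ~25% fewer functional units
# (3) 900MHz -> 700MHz clock
gflops_40nm = gflops_6870 * 0.75 * (700.0 / 900.0)
print(f"hypothetical 40nm part: {gflops_40nm:.0f} GFLOPs")   # ~1176, i.e. ~1200

# (4) 80% density scaling to 28nm, then the same unit and clock reductions
gflops_28nm_units = gflops_6870 * 1.8 * 0.75                 # ~2721, ~35% more units than a 6870
gflops_28nm = gflops_28nm_units * (700.0 / 900.0)
print(f"hypothetical 28nm part: {gflops_28nm:.0f} GFLOPs")   # ~2117 -> "about 2100"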

Looking at some of these factors and expectations:


  • 230mm^2 = the smaller end of the last gen GPU footprint, which ranged from 230-260mm^2; 10% less area than Barts (6870, 255mm^2)


  • 2.75B transistors = a 230mm^2 Barts-style GPU with 80% scaling from 40nm to 28nm


  • 700MHz = less than a Barts 6850 (775MHz, 127W TDP), 6870 (900MHz, 151W), Cayman 6950 (800MHz, 200W), and 6970 (880MHz, 250W). 28nm should bring a solid reduction in power, but the increase in transistors will scale power draw back up. A 6850 is a reasonable 127W considering its 128GB/s of memory bandwidth, but a console also needs to accommodate the optical drive, HDD, CPU, system memory, etc. With costs (yields/binning) and the RRoD (and YLOD) firmly in memory, conservative clocks are likely, although the "turbo" features in current GPUs indicate that 700MHz is on the very low end of what should be expected. Comparing a 6790 @ 840MHz with a 150W max TDP against a 6850 @ 775MHz with a 127W max TDP shows that a chip with more functional units and more net performance can use less power than a chip with fewer units at a higher frequency.


  • 2100GFLOPs = 80% scaling from 40nm to 28nm, minus ~10% for the space reduction (Barts 255mm^2 to our 230mm^2), ~15% for redundancy, and a ~22% reduction in frequency. GFLOPs may also be hit by the new SIMD+scalar GCN architecture and DX11.1 overhead as well as additional raster pipelines; on the other hand the figure may end up higher, because many units don't need to scale and shaders are often easier to pack in. E.g. it is unlikely ROPs will scale from 32 to 64 (there may even be a reduction to 24 or 16 ROPs on a console GPU), so that space may be used for more shader units.


  • 76 TMUs, 52.5GT/s = (What is a TMU these days, anyway?) I am basing this on Barts-style TMUs: the 6870's 56 TMUs plus the 35% increase in units. For comparison, Cayman has 96. A 6870 (56 @ 900MHz) is 50.4GT/s; a 6970 (96 @ 880MHz) is 84.5GT/s.


  • 16 ROPs, 11.2GP/s = Or 24. A 6870 is 28.8GP/s (32 ROPs @ 900MHz). When Xenos and RSX shipped they had 8 ROPs while competing PC GPUs had 16. Consoles have some limiting factors, like targeting at most ~2MPixel resolutions (1080p, and possibly 2 x 1080p for 3D / 2 player "split" screen), but most games will be 720p 30Hz scaled up to 1080p; memory bandwidth will also be a limiting factor. Consoles are about maximizing resources, and 32 (or 64) ROPs doesn't seem like an investment console designers will make when that area could be spent on more shaders.
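The texel and pixel rates in the last two bullets are just unit count times clock; a quick check of those figures against a 6870 (the 76 TMU / 16 ROP / 700MHz inputs are the guesses from this list):

Code:
# Texel and pixel fill rates: units x clock.
def gtexels_per_s(tmus, mhz): return tmus * mhz / 1000.0
def gpixels_per_s(rops, mhz): return rops * mhz / 1000.0

print(gtexels_per_s(76, 700), gpixels_per_s(16, 700))  # ~53.2 GT/s (~the 52.5 above), 11.2 GP/s
print(gtexels_per_s(56, 900), gpixels_per_s(32, 900))  # 50.4 GT/s, 28.8 GP/s for a 6870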

I think this is on the conservative side. The CPU will be smaller than Xenon IMO (and most certainly smaller than CELL), and even this conservative GPU reflects the fact that more processing will be pushed onto the GPU.

I would like to think the above is wrong (or a reason not to have a new console on 28nm in 2013!). Putting this into perspective, this theoretical GPU is a hair faster than a 6870 in GFLOPs and GT/s but not even half as fast in fillrate.

Code:
Mod     MHz   Shad   TMU   GT/s   ROP   GP/s   GFLOP   TDP
-       700      -    76   52.5    16   11.2    2100     -
6790    840    800    40   33.6    16   13.4    1344   150
6850    775    960    48   37.2    32   24.8    1488   127
6870    900   1120    56   50.4    32   28.8    2016   151
6950    800   1408    88   70.4    32   25.6    2253   200
6970    880   1536    96   84.5    32   28.2    2703   250
Simply put: I am not sure there is much stomach for future proofing. The early 2011 GPUs and the fall 2012 GPUs are going to be a lot faster on the PC side. Further, I predict a mid-range PC GPU will be (1) faster and (2) cheaper. The Xbox and Xbox 360, and even the PS3, had relatively solid GPUs at the time of their launch, but that does not seem likely this time. Cost, reliability, and the versatility of what a console needs to do shift the budgets.

As for cost, I think the above leaves a lot of budget for a competitively priced console. One could argue to reduce things even further, but there is always a base cost and things only get so small. Looking at the retail prices of these models (6790 1GB $149, 6850 1GB $179, 6870 1GB $239, 6950 1GB $259, 2GB $299, 6970 2GB $299) and considering that AMD, their distributors, and the retailers all take a cut, even at the high end for a 1GB 255mm^2 chip ($239 retail) the actual cost is far below the sticker price, which makes it a viable console part. As of today I see the 6950 1GB at $220, 2GB at $239, and the 6870 1GB at $155 at NewEgg (11/29/2011).

There you have it folks: next gen graphics can be bought, right now, for about $155 (CPU not included).

If there is a glimmer of hope, it is that if AMD, the distributors, and the retailers can all make their cut on a $155 product now, then after a node reduction, and going with a mild "loss leader" model, you would think and hope that a $299-$399 console could pack in a lot more punch. But I don't think the console makers are thinking along these lines.

And I haven’t even touched memory.

I will throw out this wild card: I think the smaller design above fits well with the cost considerations of stacked memory (higher performance, lower power) and a Silicon Interposer (SI). The bigger your GPU, the more expensive the SI, as it has to fit both the GPU and the memory.

We may see a setup with 1GB of very fast memory for the GPU (with an additional 2GB for system memory) and a GPU along the lines of the above, with some concessions to get the GFLOPs up a bit, as I think, at least for MS, a lot of processing will be moved to the GPU.

GPU Stats: http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

* As new models get new processes, the cost of cooling will also go down. So an extra $5-$10 on better cooling (compared to a 360) on a launch unit is easier to justify, as this cost will be reduced on later models where aggressive cooling is no longer necessary.

Time to go back and compare my 2006 predictions!

Btw, this theoretical GPU, with 2GB of UMA, would be about 10x faster in raw metrics (GFLOPs, texturing, etc.) and have 4x the memory compared to the current consoles. Factor in the 2.25x pixel cost of 1080p (and double that again for full 3D) and quite frankly: this is not really impressive. 28nm may not be dense enough to deliver a true *traditional* next gen experience at the budgets console makers will likely be looking at. 20nm with FinFETs and the hopeful emergence of relatively affordable memory stacking may offer a huge jump over 28nm. The issue is TSMC probably won't have solid product until 2015… if they don't choose to chuck the roadmap. Again.
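A rough check of those multiples, using the from-memory Xenos figures earlier in the post and the 28nm sketch above (treat every input as ballpark):

Code:
# Ballpark generational multiples for the hypothetical 28nm GPU vs Xenos.
xenos   = {"GFLOPs": 240,  "GT/s": 8.0,  "memory MB": 512}    # 16 TMUs @ 500MHz, 512MB total
nextgen = {"GFLOPs": 2100, "GT/s": 52.5, "memory MB": 2048}   # the 28nm sketch, 2GB UMA

for key in xenos:
    print(f"{key}: {nextgen[key] / xenos[key]:.1f}x")
# ~8.8x GFLOPs, ~6.6x texturing, 4.0x memory -- roughly the "about 10x / 4x" above

print(f"1080p vs 720p pixel cost: {1920 * 1080 / (1280 * 720):.2f}x")   # 2.25x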

Ps- Sony/MS please, one of you, prove me wrong.
 
I'd say they just repackage current XB360 into a set-top and sell the next-gen system as a real gaming console.
Yeah, I am not sure what the advantage is of a totally new design when, if they wanted a set top box, a focus on shrinking the current Xbox could do the same thing.
 
the cost of large standard storage and Kinect/Move like devices need to be compensated for in other aspects of the design

There is no confirmation these will be included in a "core" type next gen SKU.
 
Josh, simply addressing your point about the GPUs: it basically boils down to the fact that PC GPUs across the board use way more power and are far bigger now than they were in 2006, and a console's power envelope hasn't gotten any bigger (it has probably shrunk).

Even so, I expect the nextbox to have at least a 6950 level GPU, and to be based on GCN or Kepler for DX11.1 compliance and robust GPGPU capabilities. The reason being that a console GPU can cut out all the unnecessary stuff that we have in PC parts. Hell, remember the Sideport? That thing was never used!
 
Josh, simply addressing your point about the GPUs: it basically boils down to the fact that PC GPUs across the board use way more power and are far bigger now than they were in 2006, and a console's power envelope hasn't gotten any bigger (it has probably shrunk).

Hey Homer (we miss you on Live!) I am not sure about that? It looks like Barts is right down the line a console friendly design in terms of TDP?

The 6850, which pairs 1GB of GDDR5 at an effective 4GHz with a Barts chip (255mm^2 @ 775MHz on 40nm), has a max TDP of 127W (from wiki and supported here). This is for the entire board.

As you mention, console designs do cut corners, and I mentioned some of that. But the other aspect is that moving to 28nm on TSMC allows a 45% increase in performance for the same gate leakage. Obviously some of that performance benefit is going to be traded back to compensate for power draw AND to accommodate the increase in gates. I also mentioned reducing the die size by 10%. AMD has also said GCN is aimed at efficiency per watt, so between that architectural focus and console corner cutting, I am not seeing Barts as totally un-console-friendly. I think these things all together make for a friendly console part, as that leaves 80-120W for the rest of the system (CPU, cooling, drives, etc). If that is not enough, and it may not be, the next thing to get downgraded is the memory, as it is quite power hungry.
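A crude power-budget sketch of that last step (the ~200W system total and the GPU board range are illustrative assumptions on my part, not confirmed figures):

Code:
# Crude console power budget. All figures are assumptions for illustration.
total_system_w = 200          # assumed overall envelope for a launch console
barts_6850_board_w = 127      # 40nm 6850 board (GPU + 1GB GDDR5) max TDP, from above

# If 28nm plus console-style trimming pulls the GPU+memory board into roughly
# the 80-120W range, the rest of the box gets what is left:
for gpu_board_w in (80, 100, 120):
    rest = total_system_w - gpu_board_w
    print(f"GPU board {gpu_board_w}W -> {rest}W for CPU, drives, fans, PSU losses")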

I am rusty at this, so point out my error :D

Where did I offend the TDP Dragon?

Even so, I expect the nextbox to have at least a 6950 level GPU, and to be based on GCN or Kepler for DX11.1 compliance and robust GPGPU capabilities. The reason being that a console GPU can cut out all the unnecessary stuff that we have in PC parts. Hell, remember the Sideport? That thing was never used!

What I was projecting is just shy of a 6950!

So we agree ;)

And I hope that is the lower boundary because I think, long term, that a lot of leg room can be found in development by moving tasks over to the GPU. Skimping a little on the CPU (pretty poor options on the console front anyhow) and getting some additional GPU Compute horses is what I would like to see.

But no one asked me!
 
My Live account expires in a couple of weeks and will not be renewed. RIP Homerless :(
Of course I sold my Xbox months ago to dedicate my gaming life to the PC so Live would be quite the waste of money.

Yeah Barts is just about right for a console part on 40nm. I don't think we disagree on anything.

I was saying that PC GPUs in general will handily beat even the next gen consoles from here on out, because on PC we have accepted higher TDPs, bigger chips, and wider memory buses as the norm. Even mid/low range PC GPUs are rocking 150W TDPs and 256bit memory buses these days. No console will ever have these advantages.

This is not to say the consoles will be incredibly weak. They will just have to be more clever about how they deal with problems like memory bandwidth.
 
I hope not...

I will bite.

Give me 2 or 3 options for viable chip contracts (i.e. no Intel) for a console CPU available on 28nm in 2013 and the TDP and area we are looking at and then I will respond.

Deal?
 
I was saying that PC GPUs in general will handily beat even the next gen consoles from here on out, because on PC we have accepted higher TDPs, bigger chips, and wider memory buses as the norm. Even mid/low range PC GPUs are rocking 150W TDPs and 256bit memory buses these days. No console will ever have these advantages.

This is not to say the consoles will be incredibly weak. They will just have to be more clever about how they deal with problems like memory bandwidth.

Totally agree. That console manufacturers need to stick to a ~$300 sticker price puts certain limits on raw horsepower.
However, given a well balanced, smart console design, the lack of raw horsepower can be compensated for with clever, "to the metal" programming.
 
Sure PCs will perform better than next-gen consoles.

But those PCs won't cost $300 or $400. Probably not even $800.
 
I was saying that PC GPUs in general will handily beat even the next gen consoles from here on out, because on PC we have accepted higher TDPs, bigger chips, and wider memory buses as the norm. Even mid/low range PC GPUs are rocking 150W TDPs and 256bit memory buses these days. No console will ever have these advantages.

With regard to bandwidth (256-bit), either next gen XDR tech or optical interconnects (maybe not next gen but next next gen) will one day help with that. With regard to TDP, I'd have to see how far performance scales as you increase it. I know that SLI, with its microstutter and compatibility issues, does not seem to offer great otherworldly benefits while vastly raising TDP and costs. I've also seen that the cost/power of going from mid-range to high-end enthusiast CPUs and GPUs is often not worth it: a few tens of percent improvement for a doubling or tripling of cost plus massive noise and heat output. Not sure if things have changed in that regard.
 
With regard to bandwidth (256-bit), either next gen XDR tech or optical interconnects (maybe not next gen but next next gen) will one day help with that. With regard to TDP, I'd have to see how far performance scales as you increase it. I know that SLI, with its microstutter and compatibility issues, does not seem to offer great otherworldly benefits while vastly raising TDP and costs. I've also seen that the cost/power of going from mid-range to high-end enthusiast CPUs and GPUs is often not worth it: a few tens of percent improvement for a doubling or tripling of cost plus massive noise and heat output. Not sure if things have changed in that regard.

Erm, you know this hypothetical next-gen XDR memory will benefit from a larger bus just the same as all other memory does...

Also I don't know how SLI came into the conversation. That came out of nowhere and I don't know how it's relevant or how to respond :cry:

There are diminishing returns at a certain point for every microprocessor when it comes to TDP, but you can go well past the console TDP limits and still get very nice perf/watt returns. E.g. an 800MHz GTX460 is quite a bit faster than the stock part while staying within very reasonable TDP limits for a PC GPU.
 
Erm, you know this hypothetical next-gen XDR memory will benefit from a larger bus just the same as all other memory does...

Also I don't know how SLI came into the conversation. That came out of nowhere and I don't know how it's relevant or how to respond :cry:

There are diminishing returns at a certain point for every microprocessor when it comes to TDP, but you can go well past the console TDP limits and still get very nice perf/watt returns. E.g. an 800MHz GTX460 is quite a bit faster than the stock part while staying within very reasonable TDP limits for a PC GPU.

Next gen XDR would benefit from a wider bus too, but it could help cover the difference, especially if it remains unused in the PC arena, and it would only be a temporary band-aid. Optical interconnect technology, if it can be brought to consumer electronics, should do away with this problem.

As for SLI, console TDP will likely be in the 200-ish W range, though I've heard consoles have shipped with power supplies rated near 400 W.

An entire PC with a 580 and an overclocked Intel i7 consumes 4xx-ish W; consoles will not have to deal with power hungry overclocked x86 CPUs, power hungry large HDDs, etc.

Keep in mind this is enthusiast grade; a mid range setup with far lower power consumption probably offers 15-25% lower performance, certainly not a mind-boggling drop.
 
I don't think it's pertinent to talk about current GPU pricing, or any pricing at all, when talking about console GPUs.

Price is not going to limit Sony or MS when it comes to GPUs.
 
Can't seem to edit my post.

Prohibitive energy/performance gaps seem to occur when we get into SLI setups.

People consider a 2-3x performance gap reasonably close, and I doubt there's a 2-3x performance gap between a console-viable midrange part and an enthusiast part once you exclude SLI setups; the gap is usually a few tens of percent.
 