Predict: The Next Generation Console Tech

@cteam: None of those comments make any suggestion of 8970-like performance out of Durango. In fact it's quite the opposite. PS4 being 10x more powerful than RSX puts it at more like 5850 level, and Durango being even more impressive could easily refer to the CPU and RAM, as other rumours suggest, rather than to a GPU that's 2-3x faster than the one in PS4.

It totally depends on what you are counting.

If raw flops, then 10x RSX (about 250GFLOPs) is a full-fledged Pitcairn (which is about 2,500GFLOPs). But there are so many different ways to slice and dice--texturing, fill rate, triangles, performance at a set resolution, etc. 10x doesn't mean much by itself.
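
To put rough numbers on that, here's a quick back-of-the-envelope sketch; the RSX figure is just the round number quoted above, and the 1280-ALU, ~1 GHz Pitcairn configuration is the commonly cited desktop spec, not anything console-specific:

```python
# Back-of-the-envelope check of the "10x RSX raw flops ~= Pitcairn" comparison above.
# Theoretical peak GFLOPS = ALUs * clock (GHz) * 2 (one multiply-add per ALU per clock).

def peak_gflops(alus: int, clock_ghz: float) -> float:
    return alus * clock_ghz * 2

rsx_gflops = 250.0                        # rough figure used in the post above
pitcairn_gflops = peak_gflops(1280, 1.0)  # desktop Pitcairn-class part: 1280 ALUs at ~1 GHz

print(f"Pitcairn ~{pitcairn_gflops:.0f} GFLOPS -> {pitcairn_gflops / rsx_gflops:.1f}x RSX")
# -> Pitcairn ~2560 GFLOPS -> 10.2x RSX
```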
 
True, but if the intention is to simulate something even more powerful than these 2 6xxx GPUs in Crossfire, then it would make sense for them to be using the fastest 6xxx series GPUs available.

Why?

You yourself were beating on the TDP issue. Maybe they wanted cheaper, mass available GPUs (i.e. mainstream) for their dev kits?

Maybe as one rumor indicates there will be dual GPUs in the final kit so they wanted to begin synthesizing that environment?

Maybe the memory configuration or bandwidth is more in line with their bottleneck?

Or maybe it is purely cost consideration and they didn't want to invest boatloads into early dev kits when the goal was to get kit out there where features could be utilized with no illusion of matching production performance.

Maybe the goal isn't performance simulation but feature simulation.

I can think of a host of reasons, especially based on some of the rumors, why they could have opted for such. Doesn't make it the case--or even a real rumor, but there could be a lot of reasons why a single, faster, more expensive, and hotter card was not chosen.
 
It totally depends on what you are counting.

If raw flops, then 10x RSX (about 250GFLOPs) is a full-fledged Pitcairn (which is about 2,500GFLOPs). But there are so many different ways to slice and dice--texturing, fill rate, triangles, performance at a set resolution, etc. 10x doesn't mean much by itself.

Yeah, but that's not accounting for efficiency gains. We've heard 2x (or more) greater efficiency for shaders in GCN compared with Xenos/RSX, but other areas are likely to have increased similarly. I remember reading a review of GCN, for example, which demonstrated that its texture units (or was it the ROPs?) were twice as efficient as those in the 6xxx series, which themselves are likely to be quite a long way ahead of RSX/Xenos.
 
@cteam: None of those comments make any suggestion of 8970-like performance out of Durango. In fact it's quite the opposite. PS4 being 10x more powerful than RSX puts it at more like 5850 level, and Durango being even more impressive could easily refer to the CPU and RAM, as other rumours suggest, rather than to a GPU that's 2-3x faster than the one in PS4.

Both are confirmed insiders.

Since when is it confirmed to be a real 8970? It is a rumour, and it's in line with most of the other rumours about MS listening to developers and doing a more exotic GPU design.

PS4 (with the target specs in hand) will be ~10x PS3 (in CPU and GPU) and Xbox Next will be more impressive

Also remember, another insider said more AMD people are working on the next Xbox compared to the next PS4.

So if MS has an impressive CPU, does the GPU have to be weak?
What if MS wants an impressive CPU and a great GPU (having listened to developers)?
They have the capacity to do that (money, revenue streams, and no need to go the safe route a la PS4 with off-the-shelf parts).
 
Why?

You yourself were beating on the TDP issue. Maybe they wanted cheaper, mass available GPUs (i.e. mainstream) for their dev kits?

Maybe as one rumor indicates there will be dual GPUs in the final kit so they wanted to begin synthesizing that environment?

Maybe the memory configuration or bandwidth is more in line with their bottleneck?

Or maybe it is purely cost consideration and they didn't want to invest boatloads into early dev kits when the goal was to get kit out there where features could be utilized with no illusion of matching production performance.

Maybe the goal isn't performance simulation but feature simulation.

I can think of a host of reasons, especially based on some of the rumors, why they could have opted for such. Doesn't make it the case--or even a real rumor, but there could be a lot of reasons why a single, faster, more expensive, and hotter card was not chosen.

But the "rumour" said the dev kits have 2x 6xxx's and the final console would be even more powerful. So whether the final console is to sport 1 or two GPU's, they still could have more accurately represented it by using more powerful 6xxx series GPU's.

I can't see why MS would choose to cut costs here if it were possible to match the final console's performance with off-the-shelf hardware. How many dev kits do they send out? A few dozen? A few hundred? And what's the premium of fitting them all with 6970s rather than, say, 6770s? $100 each? That's peanuts to MS. Why would they choose to potentially gimp the all-important launch titles of the next-gen consoles, which are due to rake in millions, for the sake of saving a few thousand on underpowered dev kits?

MS certainly spared no expense on the dev kits for the 360, fitting them with the very fastest premium GPUs available at the time.
 
Yeah, but that's not accounting for efficiency gains. We've heard 2x (or more) greater efficiency for shaders in GCN compared with Xenos/RSX, but other areas are likely to have increased similarly. I remember reading a review of GCN, for example, which demonstrated that its texture units (or was it the ROPs?) were twice as efficient as those in the 6xxx series, which themselves are likely to be quite a long way ahead of RSX/Xenos.


To quote myself, which you quote above: It totally depends on what you are counting.

Taking a vague generalization about performance (from a rumor, no less), layering architectural differences on top of it without even knowing what architecture it is, and then arguing about it seems like a fruitless endeavor.

As it stands the best rumor we have (i.e. due to the SmartGlass connection), and it has at least some system info, is the 2-year-old product pitch that was 4-6x "performance" with 64 ALUs and 4-8GB of memory. At least that provides a basic context--but even then it doesn't provide enough information to begin the extrapolations you are presenting. I don't know how having less information allows even more nitpicking about what it must mean.

So, 10x what... unless you know a lot more than you are letting on (which I doubt, based on your posts), this is akin to Don Quixote raging against the windmill.
 
Both are confirmed insiders.

I'm not arguing against the source or the comments, I'm arguing against your interpretation of them. You're making assumptions based on those comments that simply don't follow.

Since when is it confirmed to be a real 8970? It is a rumour, and it's in line with most of the other rumours about MS listening to developers and doing a more exotic GPU design.

No, it's not in line at all. Greater than 2x 6970 performance (your rumour's Durango prediction) is not even remotely in the same region as 10x RSX performance (the more reliably rumoured PS4 performance). The two rumours do not work together unless Durango is going to be 2-3x the performance of PS4, which is a ridiculous assumption.

PS4 (with the target specs in hand) will be ~10x PS3 (in CPU and GPU) and Xbox Next will be more impressive

Also remember, another insider said more AMD people are working on the next Xbox compared to the next PS4.

This doesn't prove anything, and it certainly doesn't prove that Durango will have 2-3x the performance of PS4. Some rumours are suggesting Durango has 2x (or more) the RAM of PS4. That alone would be enough to make it "more impressive" if they had similar CPUs and GPUs.

So if MS has an impressive CPU, does the GPU have to be weak?

Of course not. But as I said above, to achieve 8970-like performance (greater than 2x 6970) they are going to need a GPU around 3x as powerful as the one rumoured to be in PS4 at 10x RSX. There's a world of difference between that and a GPU that is slightly weaker than the PS4's which, when combined with a more powerful CPU and more RAM, makes an overall more powerful console, or even a GPU that's slightly more powerful than the one in PS4, making Durango an all-round faster machine.
 
So, 10x what... unless you know a lot more than you are letting on (which I doubt, based on your posts), this is akin to Don Quixote raging against the windmill.

I have no inside sources at all, only what I read on here. I agree there's very little we can take from the current rumours but my understanding of the 10x rumour was that it was talking about performance, not paper flops. Which makes perfect sense as to compare only on flops would be to both undersell the console (from a marketing point of view) and misrepresent it (from an informative point of view, e.g. telling developers what to target).

We can be certain that whatever goes into these consoles will be vastly more efficient than Xenos/RSX so it's a fairly safe assumption to say that a GPU with 2.4TFLOPS of throughput would not equate to "only" 10x RSX performance.
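
To illustrate how that arithmetic works out, here's a quick sketch; the 2x efficiency factor is simply the assumption taken from the GCN efficiency claims above, not a measured number:

```python
# Illustration of the efficiency argument: if a GCN-class GPU extracts roughly twice the
# real-world work per theoretical flop compared with RSX (an assumed factor taken from the
# 2x shader-efficiency claim above), the "effective" multiple over RSX is far larger than
# the raw-flops ratio alone suggests.

rsx_gflops = 250.0         # rough RSX figure used earlier in the thread
candidate_gflops = 2400.0  # the 2.4 TFLOPS GPU mentioned above
efficiency_factor = 2.0    # assumed work-per-flop advantage of the newer architecture

raw_ratio = candidate_gflops / rsx_gflops
effective_ratio = raw_ratio * efficiency_factor

print(f"raw flops ratio: {raw_ratio:.1f}x, efficiency-adjusted: ~{effective_ratio:.0f}x")
# -> raw flops ratio: 9.6x, efficiency-adjusted: ~19x
```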
 
Yes. The 7970 is not 10x the current generation's performance, as the more reliable rumours suggest; it is more like 25x. And it draws far too much power for a console.

Yes, a single 8970 is going to be a lot more power efficient than two 6970s at similar or greater performance, but it's still going to drag down something like 250-300W on 28nm. Compare that to the ~70W Xenos and RSX were using when they launched and you realise how ridiculous it is.

But 12-14x Xbox 360 performance is not >2x 6970s. It's a single 6850.

Yes and it's nothing more than a fantasy or a severe misinterpretation. I've seen the rumour that mentions something along the lines of "it being like 2 PC's" but that could simply refer to the existence of an APU plus discrete GPU.

For all we know future consoles may have higher power capacities. Right now, I see a big gap being built between low power systems (iPads and Android tablets) and consoles. Due to the fact we are reaching a point of diminishing returns, in the not too distant future it might require a larger box for a console to have graphics that are OBVIOUSLY far beyond tablets.

Also, a single 8970 might only consume 200 watts on 20nm. Personally, I think any of the console makers are making a mistake if they use 28nm. With 200 watts for the GPU, they could then utilize the other 150 watts for other purposes.

Due to the fact it is getting more and more difficult to produce graphics that are massively better than a previous system, a 70w PS4 GPU is not going to have the clout to produce graphics that are obviously far beyond the PS3.
 
my understanding of the 10x rumour was that it was talking about performance, not paper flops. Which makes perfect sense as to compare only on flops would be to both undersell the console (from a marketing point of view) and misrepresent it (from an informative point of view, e.g. telling developers what to target).

We can be certain that whatever goes into these consoles will be vastly more efficient than Xenos/RSX so it's a fairly safe assumption to say that a GPU with 2.4TFLOPS of throughput would not equate to "only" 10x RSX performance.

OK, I will play the game. Re: performance. Is your situation texture bound? Fill rate bound? Shader bound? Or are they seeing a 10x improvement on their current code by dropping it in (and is this in select parts of the code or across the board in every aspect)? Is the situation a best-case scenario, where the biggest architectural change shores up a weak point of Xenos/RSX, or a worst-case scenario, where RSX/Xenos were already strong there and this is the least improved aspect of the new design? Or, by golly, is 10x just a nice round number to express the approximate memory increase or how many polygons they can throw on screen at any one time?

You don't know. Hence dismissing other possibilities, even unlikely ones, while constantly *insisting* it must be one thing or another is just mind boggling. You cannot even categorize what performance is, as different pressures change it.

I don't think it takes much imagination to picture a design aimed at addressing a specific situation, e.g. fill rate: the Xbox 1 had 6.4GB/s of total system bandwidth, and fill rate (especially with transparencies) was a big issue. The 360 went a long way toward addressing this with the ROPs on eDRAM, so software that was previously fill rate bound no longer is (the peak speedup in the worst-case scenario was 40x). So it could be legitimate to have software situations where you were fill rate bound and saw gargantuan performance increases, but to call it 40x would be inaccurate--unless the person passing along the rumor was specifically looking at such.
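
A crude Amdahl-style sketch of that point, using purely made-up fractions and the 40x figure from the fill-rate example:

```python
# Amdahl-style sketch: a huge speedup in one area (e.g. fill rate) only helps the part of
# the frame that was actually bound by it. All numbers below are purely illustrative.

def overall_speedup(bound_fraction: float, local_speedup: float) -> float:
    """Frame-level speedup when only `bound_fraction` of frame time gets `local_speedup`."""
    return 1.0 / ((1.0 - bound_fraction) + bound_fraction / local_speedup)

for fraction in (0.2, 0.5, 0.9):
    print(f"{fraction:.0%} fill-rate bound, 40x local speedup "
          f"-> {overall_speedup(fraction, 40):.1f}x overall")
# 20% -> 1.2x, 50% -> 2.0x, 90% -> 8.2x
```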

That is why 10x performance doesn't mean squat without a proper context, which the rumor lacks. If the rumor is legit we have no clue how many people the data has been filtered through, how they are deriving their metrics, and what they mean.

If you knew for a fact that they were talking very specifically about running today's code 10x faster, you would obviously know more than the person passing along the rumor!

I just don't see why you continue to insist that it must be "10x" (whatever that means) post-architectural efficiencies. Tell me: what GPU is 10x faster than, say, an X1800?

(Hint: are you going to run to some architectural metrics and guess at architectural increases, or are you going to look at gaming benchmarks? Which ones? With what features enabled?)

And that really gets to the end point: take a product we know, e.g. the ATI X1800, and show me a 10x faster GPU.

This should be child's play, after all, because you both know the starting HW and have nearly 8 years of GPU releases to find the product that fits the mold.

I will be quite interested in what you choose as 10x the performance ;)
 
Also, a single 8970 might only consume 200 watts on 20nm. Personally, I think any of the console makers are making a mistake if they use 28nm. With 200 watts for the GPU, they could then utilize the other 150 watts for other purposes.

If consoles are coming in 2013 they won't be on 20nm.

And we don't even know what an 8970 is, let alone its TDP. iirc recent news indicates AMD's new designs will be 28nm though. The 7970 was 365mm^2 on 28nm and is over 200W. It is safe to assume the 8970 will be FASTER, on the same process, which makes it also safe to assume it won't be making dramatic drops in TDP under load.

Due to the fact it is getting more and more difficult to produce graphics that are massively better than a previous system, a 70w PS4 GPU is not going to have the clout to produce graphics that are obviously far beyond the PS3.

So maybe the console makers are just going to bail out of chasing a path that leads to small nuclear furnaces in every home ;)
 
I'm getting excited by these HD 8970 rumors... crikey, never mind 10x, we are talking 30x...

It's going to be a mid-range card imo... something like an 8770?

How else are they going to hit a 250W TDP?
 
If consoles are coming in 2013 they won't be on 20nm.

And we don't even know what an 8970 is, let alone its TDP. iirc recent news indicates AMD's new designs will be 28nm though. The 7970 was 365mm^2 on 28nm and is over 200W. It is safe to assume the 8970 will be FASTER, on the same process, which makes it also safe to assume it won't be making dramatic drops in TDP under load.

But wait! It's possible that the 8970 exceeds 4 TFLOPs and that would fit in the 1+ TFLOPs I was told. :LOL:
 
If consoles are coming in 2013 they won't be on 20nm.

Late 2014 for commercial products at best. And even then, 20nm isn't a holy grail. Below is directly from TSMC's webpage on their 20nm process, and this is likely the best-case scenario.

TSMC's 20nm process technology is 30 percent faster, has 1.9 times the density, and uses 25 percent less power than its 28nm technology. TSMC 20nm technology is the manufacturing process behind a wide array of applications that run the gamut from tablets and smartphones to desktops and servers.
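
Plugging those marketing numbers into a rough sketch (the die size is the 7970 figure mentioned earlier; the TDP and clock are illustrative assumptions, and there's no guarantee the speed and power gains apply at the same time):

```python
# Applying TSMC's quoted 28nm -> 20nm factors to a hypothetical 28nm GPU (7970-sized die,
# as cited earlier in the thread; the TDP and clock figures are illustrative assumptions,
# and the speed and power gains are marketing best cases that may not apply simultaneously).

die_area_28nm_mm2 = 365.0  # 7970-class die size mentioned earlier in the thread
tdp_28nm_w = 250.0         # assumed board power, for illustration only
clock_28nm_ghz = 0.925     # assumed clock, for illustration only

density_gain = 1.9   # "1.9 times the density"
speed_gain = 1.30    # "30 percent faster"
power_drop = 0.75    # "uses 25 percent less power"

print(f"same design shrunk to 20nm: ~{die_area_28nm_mm2 / density_gain:.0f} mm^2")
print(f"same clocks at lower power:  ~{tdp_28nm_w * power_drop:.0f} W")
print(f"or same power, higher clock: ~{clock_28nm_ghz * speed_gain:.2f} GHz")
```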
 
Cteam, can you please start putting links to the quotes you post on B3D? Because that new "rumor" you just put on the last page looks like something made up.
 
Most of the stuff you're saying, Cteam, is complete rubbish, and other things are just plain common sense that anyone could get right (assuming the AMD rumors are true).

Like, you think AMD is going to be pushing exotic tech that's completely different from what they are doing now? Hell no. They're going to say: this is what we have, this is where we are going, and for this much money we can add this, that and the other.

My guess is they then go to Sony and do the same thing. If AMD has pulled it off then it's really good for everyone involved: Sony/MS get access to some good high-performance cores, GPUs, memory interfaces and interconnects, and AMD gets a truckload of cash to develop something in a direction they wanted to head in anyway.
 
Do you have any technical data that contradicts this? The only "silver lining" I can think of is essentially that some memory-hungry techniques may be more feasible, but that is a corner case. 4GB, or even 2GB, is a quite substantial frame buffer (especially with virtual texturing) for a 1080p or smaller resolution target. A hit in normal map resolution, texture resolution, etc. is going to trend toward the diminishing-returns area, and the 2x performance is going to open up all sorts of visual features -- just go look at gaming benchmarks comparing such disparate GPUs, where all the extra power can offer better IQ, better framerates, and a slew of extra visual features. It could be the difference between having quality AO, GI, interactive particles, AA, etc. at a silky smooth 30fps and the competitor losing those, going for a lower resolution, and having issues with framerate.

And none of that matters if the size of the memory is so limited that it cannot contain the core of the game, let alone the visual assets. Something like BF3 fully decked out with large maps is already pushing 2GB, and their next gen is going bigger still, as are many other games. Large game worlds require significantly more resources to run, which means large memory footprints. This next generation of games is the first post-32-bit generation, and it is only reasonable that game devs will find plenty of things to fill memory with.

In the console space, using 2GB as a disk cache alone will make for a better end-user experience than 2x or even 3-4x GPU performance.
 
That would fall into the huge design benefits of more memory: larger worlds, lower post-start load times, more system features instantly available, less laggy dashboards. I am all for those things (so many devices have such laggy UIs, and I was just playing FM4 and I must admit all the load times are a huge turn-off--I would much prefer a load-once, play-many approach). But visually, 2x GPU performance will allow much prettier pixels--although 2GB vs. 8GB could get ugly for larger game worlds. Like a BF game--if one version is fitting all the assets into a full 8GB, how do you get all of that to fit into 2GB? Even with a great GPU that sounds like a recipe for some serious pain, especially with the much beloved BDR.
 
Something like BF3 fully decked out with large maps is already pushing 2GB, and their next gen is going bigger still
But BF3 doesn't use virtual texturing a la Rage, does it? Also, instead of packing in more RAM, how about better texture compression? S3TC is what, 15 years old now? Nearly that, anyway. Surely we can do better today.
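
For a rough sense of what S3TC already buys you, and why compression alone doesn't make the memory question go away, here's a sketch assuming plain DXT1 at 4 bits per texel:

```python
# Rough footprint comparison: what S3TC/DXT1 already buys you and why compression alone
# doesn't solve the memory problem. DXT1 stores 4 bits per texel vs 32 for uncompressed
# RGBA8; a full mip chain adds roughly one third on top.

def texture_mib(width: int, height: int, bits_per_texel: float, mips: bool = True) -> float:
    texels = width * height * (4 / 3 if mips else 1)
    return texels * bits_per_texel / 8 / (1024 ** 2)

print(f"2048x2048 RGBA8 + mips: {texture_mib(2048, 2048, 32):.1f} MiB")
print(f"2048x2048 DXT1  + mips: {texture_mib(2048, 2048, 4):.1f} MiB")
# -> 21.3 MiB vs 2.7 MiB: an 8:1 saving, but hundreds of layers (diffuse, normal,
#    specular, ...) still add up fast.
```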
 
But BF3 doesn't use virtual texturing a la Rage, does it?

It does for the terrain. There are different approaches to virtual texturing though with different goals (e.g. see sebbbi's posts on the Trials Evolution thread).
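
For anyone following along, a minimal sketch of the virtual-texturing idea; the page size and cache budget are illustrative, not what Rage or BF3 actually use:

```python
# Minimal sketch of the virtual-texturing idea: only a fixed cache of pages is resident,
# no matter how large the virtual texture is. Page size and cache budget here are
# illustrative values, not what Rage, BF3, or any particular engine actually uses.

PAGE_DIM = 128          # 128x128 texels per page
BYTES_PER_TEXEL = 0.5   # DXT1-style compression, 4 bits per texel
CACHE_MIB = 64          # physical page-cache budget in MiB

page_bytes = PAGE_DIM * PAGE_DIM * BYTES_PER_TEXEL
pages_resident = int(CACHE_MIB * 1024 ** 2 // page_bytes)

print(f"{page_bytes / 1024:.0f} KiB per page -> ~{pages_resident} pages resident in {CACHE_MIB} MiB")
# The virtual texture itself can be arbitrarily large; only what the camera actually
# samples gets paged into the cache.
```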

Also, instead of packing in more RAM, how about better texture compression? S3TC is what, 15 years old now? Nearly that, anyway. Surely we can do better today.

Compression will always have a trade-off in terms of quality and the hardware to decompress (and the latency involved). And it isn't just diffuse but all your other texture layers used for normal/displacement mapping, specular, etc. Geometry eats up a lot of space, especially as worlds grow, assets in the world grow in number and interactivity, and when you want things to occur in real time across a map while the player is somewhere else (i.e. it is harder to just cache and stream in).
 