Predict: The Next Generation Console Tech

That's essentially what I've been proposing, though not clocked at 1GHz obviously.



How could you take that as Epic being wrong? Also, it has nothing to do with an "AMD sucks" viewpoint; it's simply looking at the information they've given us.

Epic gave us a formula to work with for Samaritan. It's very dependent on the resolution. Based on the supposed resolution (2560x1440) of the original, you're looking at 4.4 TFLOPs. Three 580s is 4.5 TFLOPs. So now we see why they needed three 580s to begin with. As a comparison, a 6970 is 2.7 TFLOPs. I think anyone who reads these boards knows two 6970s don't come anywhere close to three 580s.

Looking at their slides, the FLOP requirement comes down as the resolution is reduced. They would still need two 580s to run the demo at 1080p. At the same time a 7970 is ~3.8 TFLOPs. Considering it didn't blow away one 580 in benchmarks, there's no reason to believe one Pitcairn could handle Samaritan at 1080p.
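
(For reference, here's a minimal sketch of the arithmetic from Epic's slide, i.e. pixels x frame rate x ~40,000 shader ops per pixel; the ops-per-pixel figure is Epic's, the rest is just multiplication.)

```python
# Epic's Samaritan budget: pixels x fps x ~40,000 shader ops per pixel (per their slide).
OPS_PER_PIXEL = 40_000

def samaritan_tflops(width, height, fps):
    return width * height * fps * OPS_PER_PIXEL / 1e12

print(samaritan_tflops(2560, 1440, 30))  # ~4.4 TFLOPs -- original demo resolution
print(samaritan_tflops(1920, 1080, 30))  # ~2.5 TFLOPs -- the 1080p30 target
print(samaritan_tflops(1920, 1080, 60))  # ~5.0 TFLOPs -- 1080p60 would double it
```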

I don't believe the gains of a closed environment are going to be enough, based on what I expect the console GPUs to be. Now if UE4 is designed to make that more efficient, then yeah, I could see 1080p being reached. But based on the info Epic themselves have given us, I don't see it as possible.

I think the Samaritan demo ran at 60fps as well, so if you halve that, the requirement should be even lower and even one GTX 580 should be enough. In the worst case scenario you ditch the 4xMSAA and throw in FXAA, which should buy 20fps or so.
 
UE4 will most likely change how they render things quite radically, so yes, things might change a lot.

E.g. if they decouple shading from resolution, 1080p shouldn't be a problem. ;)

Unfortunately, while I didn't come close to finishing school and life took me even further away, my focus was on the hardware side, so I'll take your word for it. :LOL:

But I did feel it might be possible.


I think the Samaritan demo ran at 60fps as well, so if you halve that, the requirement should be even lower and even one GTX 580 should be enough. In the worst case scenario you ditch the 4xMSAA and throw in FXAA, which should buy 20fps or so.

It must have been 30fps, because the 4.4 TFLOPs figure is based on 30fps. 60fps would obviously double that.
 
They would still need two 580s to run the demo at 1080p. At the same time a 7970 is ~3.8 TFLOPs. Considering it didn't blow away one 580 in benchmarks,

A 580 is ~1.6 TF I believe, so you actually need ~1.56 GTX 580s to run Samaritan at 1080p, not 2 (it also fits with 2500x being about 2x the 1080p pixel count, so presumably 1.5 580s vs 3 again). Also, you're basing all of this on terribly unoptimized PC software, which imo is foolish. Who's to say that in a console a 6970 wouldn't indeed heavily outpower a GTX 580, as its teraflops rating suggests?

And indeed, the 7970 is probably 1.5 times faster than a 580 in some cases already, and Nvidia's GK104 is also rumored to carry a 3 TF rating. Granted, this doesn't get us down to Pitcairn, but it gets us within spitting distance. And I bet console optimization gets us the rest (with tons of room to spare, probably!)

I think this is a fool's errand anyway; we're basing this on a very old, unoptimized demo none of us really know anything about. If Epic says 2.5 teraflops, and they don't specify it only applies to an Nvidia card, I think it's generally true, but I'm sure even they didn't mean it to be 100% precise. What if Samaritan ends up running on console X with a slight cutback here and there (as others have noted, sub FXAA for MSAA or the like)? Is anybody really going to notice, as long as in the main it's there?
 
A 580 is ~1.6 TF I believe, so you actually need ~1.56 GTX 580s to run Samaritan at 1080p, not 2 (it also fits with 2500x being about 2x the 1080p pixel count, so presumably 1.5 580s vs 3 again). Also, you're basing all of this on terribly unoptimized PC software, which imo is foolish. Who's to say that in a console a 6970 wouldn't indeed heavily outpower a GTX 580, as its teraflops rating suggests?

And indeed, the 7970 is probably 1.5 times faster than a 580 in some cases already, and Nvidia's GK104 is also rumored to carry a 3 TF rating. Granted, this doesn't get us down to Pitcairn, but it gets us within spitting distance. And I bet console optimization gets us the rest (with tons of room to spare, probably!)

I think this is a fool's errand anyway; we're basing this on a very old, unoptimized demo none of us really know anything about. If Epic says 2.5 teraflops, and they don't specify it only applies to an Nvidia card, I think it's generally true, but I'm sure even they didn't mean it to be 100% precise. What if Samaritan ends up running on console X with a slight cutback here and there (as others have noted, sub FXAA for MSAA or the like)? Is anybody really going to notice, as long as in the main it's there?

In a similar discussion before, I also said this is just one company's take on achieving next gen, so it's not like Samaritan would apply to everybody's games for the next consoles.

That said, it's basic math that they would need two 580s to handle the 2.5 TFLOPs of a 1080p target. How are you going to achieve 2.5 TFLOPs on a 1.5-1.6 TFLOP GPU? You've got 900-1000 GFLOPs left over and they aren't magically going to run themselves. I also don't know where you're getting the info to calculate it the way you are. Where does the "2x 1080p pixels" come from? It's like you're trying to somehow reduce what Epic said to fit your view. The slide they provided said 1920 x 1080 x 30 (fps) x 40000 (ops per pixel) = 2.5 TFLOPs. There is no 2x factor to bring it down to, I'm assuming, 1.25 TFLOPs so that only one 580 would be needed.

Of course they didn't specify only an nVidia card. But you seem to be blatantly ignoring how differently nVidia and AMD reach their numbers in that category, and how in turn that affects how many GPUs would be necessary to run the demo from each vendor. And we all know benchmarks don't show one AMD GPU to be on par with two nVidia GPUs. Also, common sense should say that switching to a closed environment won't change that. AMD GPUs aren't going to magically change how they were architecturally built to make their theoretical FLOPs "more accurate" in a closed environment. I'm sure there is plenty of info out there (and even people reading/posting in this thread) that could explain why AMD/ATi GPUs show a much higher FLOP amount, yet don't show the difference in benchmarks.

Also, I already said UE4 might be more efficient in handling this, making it possible for a console to handle it at 1080p/30, so there's no need to harp on optimization or call me "foolish" as some kind of counterpoint. As I said, based on what Epic has said as of now, and without removing or changing anything (e.g. using FXAA instead of MSAA), it's not possible for one AMD GPU of the level I expect in the next consoles to achieve that at 1080p.
 
The Ti 500 was ~50 GFLOPs? It should be 320x240@30fps or so lol :D

Haha. About 15fps to be more exact.

Obviously the quote was supposed to have meant a single 580 card... but no, I didn't think the joke was funny... :LOL:

Considering Epic released that info almost a year after debuting Samaritan, and said info still seems to be in line with what they said back then, I would assume they haven't achieved that yet.
 
In a similar discussion before, I also said this is just one company's take on achieving next gen, so it's not like Samaritan would apply to everybody's games for the next consoles.

That said, it's basic math that they would need two 580s to handle the 2.5 TFLOPs of a 1080p target. How are you going to achieve 2.5 TFLOPs on a 1.5-1.6 TFLOP GPU? You've got 900-1000 GFLOPs left over and they aren't magically going to run themselves. I also don't know where you're getting the info to calculate it the way you are. Where does the "2x 1080p pixels" come from? It's like you're trying to somehow reduce what Epic said to fit your view. The slide they provided said 1920 x 1080 x 30 (fps) x 40000 (ops per pixel) = 2.5 TFLOPs. There is no 2x factor to bring it down to, I'm assuming, 1.25 TFLOPs so that only one 580 would be needed.

Of course they didn't specify only an nVidia card. But you seem to be blatantly ignoring how differently nVidia and AMD reach their numbers in that category, and how in turn that affects how many GPUs would be necessary to run the demo from each vendor. And we all know benchmarks don't show one AMD GPU to be on par with two nVidia GPUs. Also, common sense should say that switching to a closed environment won't change that. AMD GPUs aren't going to magically change how they were architecturally built to make their theoretical FLOPs "more accurate" in a closed environment. I'm sure there is plenty of info out there (and even people reading/posting in this thread) that could explain why AMD/ATi GPUs show a much higher FLOP amount, yet don't show the difference in benchmarks.

Also, I already said UE4 might be more efficient in handling this, making it possible for a console to handle it at 1080p/30, so there's no need to harp on optimization or call me "foolish" as some kind of counterpoint. As I said, based on what Epic has said as of now, and without removing or changing anything (e.g. using FXAA instead of MSAA), it's not possible for one AMD GPU of the level I expect in the next consoles to achieve that at 1080p.

The original Samaritan was running at 2500x whatever, right? With three 580s. My point is that 1080p is half the pixels of 2500x1600, so you need half the power, or 1.5 580s, which corroborates our other data.

Obviously I don't expect a 580 to be split in half, but I'm pretty sure that has no bearing on hypothetical future console GPUs, unless you expect both consoles to ship with stock 580s. You need ~150% of the performance of a 580, not necessarily two of them.

Also, I already said UE4 might be more efficient in handling this, making it possible for a console to handle it at 1080p/30, so there's no need to harp on optimization or call me "foolish" as some kind of counterpoint.

And I think you're drastically underestimating the huge gains that will come with console optimization. Now a better question is whether Epic was considering that with their 1.5 TF number. Given it wasn't console based, you can argue they were not.

And we all know benchmarks don't show one AMD GPU to be on par with two nVidia GPUs. Also, common sense should say that switching to a closed environment won't change that. AMD GPUs aren't going to magically change how they were architecturally built to make their theoretical FLOPs "more accurate" in a closed environment. I'm sure there is plenty of info out there (and even people reading/posting in this thread) that could explain why AMD/ATi GPUs show a much higher FLOP amount, yet don't show the difference in benchmarks.

My belief is that in a closed box AMD GPUs would, or at least might, better reflect their FLOP advantage. Regardless, now with GCN the difference between AMD and Nvidia FLOPs has drastically narrowed. GK104 might deliver about the same performance with its rumored 3 TF as Tahiti does with 3.8, which is not such a huge difference.

Pitcairn (7870) has 2.56 TFLOPs, and I suspect that under Epic's guidelines it could run Samaritan at 1080p, and I also believe it is feasible in consoles.
 
What are your thoughts on the Radeon 8xxx series? What can AMD improve over the 7xxx series while staying on the 28nm process? Charlie's article mentioned that AMD is preparing a solution based on current or near-future parts.
 
You need ~150% of the performance of a 580, not necessarily two of them.

And I think you're drastically underestimating the huge gains that will come with console optimization.

I don't understand how you can make that argument. There's no logic in it. You have to have two to cover the extra 50%. I mentioned that before. Those extra FLOPs aren't just going to run themselves. Based on what you are saying, they didn't need three 580s, since the demo at a resolution of 2560x1440 is 4.4 TFLOPs and three 580s is ~4.8 TFLOPs. And from what you're saying here, somehow a Pitcairn (I'm going with that as a possible target) in a closed environment is going to be at least equal to 1.5 times a 580? I think that's a reach at best.

Now a better question is whether Epic was considering that with their 1.5 TF number. Given it wasn't console based, you can argue they were not.

My belief is that in a closed box AMD GPUs would, or at least might, better reflect their FLOP advantage. Regardless, now with GCN the difference between AMD and Nvidia FLOPs has drastically narrowed. GK104 might deliver about the same performance with its rumored 3 TF as Tahiti does with 3.8, which is not such a huge difference.

Pitcairn (7870) has 2.56 TFLOPs, and I suspect that under Epic's guidelines it could run Samaritan at 1080p, and I also believe it is feasible in consoles.

I don't think you realize or acknowledge there is an architectural difference in how AMD/ATi achieve their numbers. One thing I have no problem saying is that there isn't a "FLOP advantage" for AMD. Benchmarks clearly show that. You're essentially ignoring all the benchmarks and saying one AMD GPU is on par with two nVidia GPUs of the same class by looking strictly at the amount of TFLOPs they have. Benchmarks show that a ~3.8 TFLOP Tahiti is not dramatically more powerful than a ~1.6 TFLOP 580. Looking strictly at the numbers, that's over 2x the amount of FLOPs, and yet the performance difference was nowhere near 2x. Based on current benchmarks, a GK104 should easily beat a Tahiti GPU. You're expecting something that current info clearly doesn't support.
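
(For anyone wondering where those paper numbers come from, here's a quick sketch using the commonly cited shader counts and clocks; peak single-precision is just ALUs x 2 FLOPs per clock for a fused multiply-add x clock speed, and says nothing about how much of that a real workload can keep busy, which is presumably why the benchmark gap is much smaller than the FLOP gap.)

```python
# Theoretical SP peak = ALUs x 2 (FMA counts as 2 FLOPs per clock) x shader clock (GHz).
# Specs below are the commonly cited figures for each card.
def peak_tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000

cards = [
    ("GTX 580 (Fermi, 512 cores @ 1544 MHz hot clock)", 512, 1.544),  # ~1.58 TF
    ("HD 6970 (VLIW4, 1536 ALUs @ 880 MHz)", 1536, 0.880),            # ~2.70 TF
    ("HD 7970 (Tahiti GCN, 2048 ALUs @ 925 MHz)", 2048, 0.925),       # ~3.79 TF
    ("HD 7870 (Pitcairn GCN, 1280 ALUs @ 1000 MHz)", 1280, 1.000),    # ~2.56 TF
]
for name, alus, clk in cards:
    print(f"{name}: {peak_tflops(alus, clk):.2f} TFLOPs")
```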
 
Has there been any talk of memory bandwidth requirements for the Samaritan demo, or just GFLOPs?
 
Has there been any talk of memory bandwidth requirements for the Samaritan demo, or just GFLOPs?

I haven't seen a mention of it. The 580 has a BW of 192GB/s though.
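
(That 192 GB/s is just the bus width times the memory data rate; a quick sketch using the 580's stock 384-bit bus and 4008 MT/s effective GDDR5 rate.)

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate in GT/s).
bus_width_bits = 384     # GTX 580 memory interface
data_rate_gtps = 4.008   # effective GDDR5 rate (1002 MHz x 4)
print(bus_width_bits / 8 * data_rate_gtps, "GB/s")  # ~192.4 GB/s
```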


On a side note, that's the first time I saw the actual quote; I had only been repeating what others had said. Seems to me Rein was just being sarcastic and it was never their intent that it could run on just one 580.
 
Some of these modern game engines are using a *lot* of buffers for deferred rendering. What are we looking at in terms of buffers with 4xMSAA with a G-Buffer, full resolution buffer for transparencies, etc?
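
(Nobody outside Epic knows the exact layout, but as a very rough sketch, assume a fairly typical deferred setup: four 32-bit G-buffer targets plus a 32-bit depth/stencil, all at 4xMSAA, and one full-resolution RGBA16F buffer for transparencies at 1080p. The target count and formats here are guesses for illustration, not Samaritan's actual layout.)

```python
# Rough framebuffer footprint for a hypothetical deferred setup at 1080p with 4xMSAA.
# Target count and formats are assumptions for illustration only.
width, height, msaa = 1920, 1080, 4
pixels = width * height

gbuffer_targets = 4        # e.g. albedo, normals, spec/roughness, misc (assumed)
bytes_per_target = 4       # 32 bits per sample (RGBA8 or similar)
depth_bytes = 4            # D24S8 per sample

gbuffer = pixels * msaa * (gbuffer_targets * bytes_per_target + depth_bytes)
transparency = pixels * 8  # one full-res RGBA16F buffer, no MSAA

print(f"G-buffer + depth @ 4xMSAA: {gbuffer / 2**20:.0f} MiB")          # ~158 MiB
print(f"Full-res transparency buffer: {transparency / 2**20:.0f} MiB")  # ~16 MiB
```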
 
I haven't seen a mention of it. The 580 has a BW of 192GB/s though.


On a side note, that's the first time I saw the actual quote; I had only been repeating what others had said. Seems to me Rein was just being sarcastic and it was never their intent that it could run on just one 580.
Oh, it was Rein? That explains everything; I tend to ignore everything he says since 2005. When the analog Xbox 360 came out, he said it would be compatible with HDMI: they'd just sell a cable with the proprietary Xbox connector on one end and HDMI on the other, just like the analog component cable. Riiiiiiight.
 
Oh, it was Rein? That explains everything; I tend to ignore everything he says since 2005. When the analog Xbox 360 came out, he said it would be compatible with HDMI: they'd just sell a cable with the proprietary Xbox connector on one end and HDMI on the other, just like the analog component cable. Riiiiiiight.

I've never really paid much attention to him, but I did notice he has a penchant for grandiosity from his boasting about how they caused MS to spend $1B to increase the memory in the 360.
 