The LAST R600 Rumours & Speculation Thread

Well, that means the bandwidth has very little to do with the overall results. The shader core has a lot more to do with it, as with many other games.

This also happens in the Far Cry engine and the upcoming Crysis engine as well, not to mention Unreal Engine 3 technology too.

Would you mind elaborating more on this "upcoming Crysis" in your statement?
It sounds as if you are working on the engine yourself or have studied it already!!
Could you please direct me to a link on it, so that I can have a good read on it :cool:
 
Doesn't really matter if it's single-cycle 4xAA ;) If settings higher than 4xAA are used, games tend to be more fillrate bound, especially games that are shader intensive, like Oblivion and upcoming titles. Where is bandwidth used more? HDR and some post-render effects, and those affect fillrates too, not just bandwidth. But if games are fillrate bound as it is, and shaders are getting more complex, pixel fillrates are going to be much more important; their requirements are increasing faster than bandwidth requirements. This is why we have seen the NV40 and R5x0 increase pixel shader power and fillrates the way they did over previous generations.

Are game developers just going to stop increasing pixel shader and fillrate needs and shift over to increased bandwidth needs? If the R600 doesn't come with more pixel shader power and the ability to push more pixels to the screen than the G80, I don't see how all that extra bandwidth will come into use. Some of it will, yes, but not much, and nowhere near the absolute limits of the bandwidth increase. There has to be a need for that much bandwidth to see its full potential.
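A rough back-of-envelope model of that fillrate-vs-bandwidth argument, in Python (every number here is an illustrative assumption, not a real chip spec):

# Toy model: which limit dominates as shaders get longer?
# All figures are illustrative assumptions, not real GPU specs.
pixels   = 2560 * 1600       # frame size
overdraw = 3.0               # average shaded samples per screen pixel (assumed)
bytes_px = 16                # framebuffer bytes touched per sample (assumed)
core_mhz = 600               # shader clock (assumed)
pipes    = 32                # pixels shaded per clock (assumed)
bw_gb_s  = 100               # memory bandwidth (assumed)

for cycles in (2, 8, 32):    # shader cycles per pixel
    t_shade = pixels * overdraw * cycles / (core_mhz * 1e6 * pipes)
    t_bw    = pixels * overdraw * bytes_px / (bw_gb_s * 1e9)
    bound = "shader/fillrate" if t_shade > t_bw else "bandwidth"
    print(f"{cycles:2d} cycles/px: shade {t_shade*1e3:.1f} ms, "
          f"bw {t_bw*1e3:.1f} ms -> {bound} bound")

The bandwidth term stays constant per frame while the shading term grows with shader length, which is the crux of the point above: longer shaders pull the bottleneck away from bandwidth.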
 
Well, that means the bandwidth has very little to do with the overall results. The shader core has a lot more to do with it, as with many other games.

Well you started off on fillrate and now you're onto the shader core. What makes you think R600 will be lacking in these areas relative to G80? The anecdotal evidence you provided doesn't really support your fillrate argument either - you even pointed out the fact that the GTS pulls away from the XTX with equivalent fillrate and bandwidth.
 
Well, that means the bandwidth has very little to do with the overall results. The shader core has a lot more to do with it, as with many other games.
Architectures are the sum of all their parts. When comparing different chips and different architectures you have to bear in mind that a whole host of things can be different - especially when comparing parts with significantly differing numbers of transistors, where more can be dedicated to improving efficiencies all around.
 
Doesn't really matter if it's single-cycle 4xAA ;) If settings higher than 4xAA are used, games tend to be more fillrate bound, especially games that are shader intensive, like Oblivion and upcoming titles.

Heh, don't try to weasel out of that one. You were basing your argument on 4xAA numbers, and now you want to just ignore that and talk about higher AA settings? Tsk, tsk ;) Exactly what evidence are you using to determine that 8xAA is more fillrate bound than bandwidth bound anyway?

If the R600 doesn't come with more pixel shader power and the ability to push more pixels to the screen than the G80, I don't see how all that extra bandwidth will come into use.

Why are you ignoring the fact that the G80 suffers a significant performance hit when 4xAA is applied? It's obviously not fillrate related, since 4xAA fillrate is supposedly equivalent to 0xAA fillrate, so it has to be bandwidth, no?
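To put a crude number on why AA leans on bandwidth, here's a worked example (buffer formats and the no-compression assumption are mine, purely for illustration):

# Raw (uncompressed) framebuffer traffic per full-screen pass,
# with and without 4x MSAA. Byte sizes are simplifying assumptions.
w, h    = 1600, 1200
color_b = 4                  # 8-bit RGBA per sample (assumed)
z_b     = 4                  # 24/8 depth-stencil per sample (assumed)

for samples in (1, 4):
    mb = w * h * samples * (color_b + z_b) / 2**20
    print(f"{samples}x samples: ~{mb:.0f} MB touched per pass")

Real hardware compresses multisample color and Z heavily, so the true hit is much smaller than 4x, but the extra traffic is all memory traffic; the ROPs still output the same number of screen pixels.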
 
So are we considering "extreme resolutions" to equal "anything higher than I personally have" now? :LOL: Should we ask Dell how many 2405s and 2407s they've sold at 1920x1200 in the last two years? Who do you think is buying those? The IGP crowd? :smile: If R600's bandwidth advantage allows it to kick butt at 1920x1200 (with AA/AF, of course), then that is going to be a significant help in its ability to move units.

Even my laptop has 1920x1200, so I don't consider that extreme. Since Dell has sold a boatload of their higher-resolution LCDs, as you point out, 2560x1600 is likely to become just "the high end" before long rather than "the extreme end". Who wouldn't want a high-end video card to go with their high-end display, and vice versa :) ? High resolutions are where performance gains really matter - there isn't much reason to care that you can now play 1024x768 at 120FPS vs. 90FPS.
 
Would you mind elaborating more on this "upcoming Crysis" in your statement?
It sounds as if you are working on the engine yourself or have studied it already!!
Could you please direct me to a link on it, so that I can have a good read on it :cool:


Well, we are interested in purchasing a license for it, but as of now it won't be available until Crysis is out. I've just had some small chats with some of the engine's developers about it over the past few months.

But long story short, CryEngine 2 - well, I shouldn't say the engine, the game itself - is much more fillrate bound than previous games.

Another thing: I don't think Crysis is going to go over the 768MB frame buffer size of the G80; they already recommend not going over that size. It's possible they are aiming at 512MB for basic high settings with no AA and AF.
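For a sense of scale, a crude render-target budget at the extreme end (the formats and the extra resolve target are assumptions, purely illustrative):

# Rough VRAM cost of render targets at 2560x1600, 4x MSAA, FP16 HDR.
# Formats and the resolve target are illustrative assumptions.
w, h, samples = 2560, 1600, 4
fp16_rgba = 8                # bytes per FP16 RGBA sample (assumed)
z_bytes   = 4                # bytes per depth-stencil sample (assumed)

color   = w * h * samples * fp16_rgba / 2**20
depth   = w * h * samples * z_bytes   / 2**20
resolve = w * h * fp16_rgba / 2**20   # single-sample resolve target
print(f"color {color:.0f} + depth {depth:.0f} + resolve {resolve:.0f} "
      f"= {color + depth + resolve:.0f} MB before any textures")

That's over 200MB on render targets alone before textures and geometry, which makes a 768MB ceiling easy to believe.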
 
Well, I happened to find a review at Digit-Life of Far Cry where both the X1900 XTX and X1950 XTX trump the 8800 GTS with 4xAA and 16xAF at 2560x1600 and also at 1600x1200...
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-fc2-wxp-pcie-aaa.html

And the strange thing is that the X1950 XTX trumps even the 8800 GTX in CoH at the highest resolution (2560x1600) on the same settings!!
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-ch-wxp-pcie-aaa.html

Just wondering what is up with the CoH test, as the 8800 GTX has more pixel shading power and memory than the X1950 XTX...

Edit: Argh... Thanks for kindly replying to the above post :smile:
 
Heh, don't try to weasel out of that one. You were basing your argument on 4xAA numbers, and now you want to just ignore that and talk about higher AA settings? Tsk, tsk ;) Exactly what evidence are you using to determine that 8xAA is more fillrate bound than bandwidth bound anyway?



Why are you ignoring the fact that the G80 suffers a significant performance hit when 4xAA is applied? It's obviously not fillrate related, since 4xAA fillrate is supposedly equivalent to 0xAA fillrate, so it has to be bandwidth, no?

I was basing my argument on settings being increased ;) not just 4xAA.

Actually, adding in 4xAA, what is the % drop? 1-3%? I don't consider that much.
 
Well, I happened to find a review at Digit-Life of Far Cry where both the X1900 XTX and X1950 XTX trump the 8800 GTS with 4xAA and 16xAF at 2560x1600 and also at 1600x1200...
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-fc2-wxp-pcie-aaa.html

And the strange thing is that the X1950 XTX trumps even the 8800 GTX in CoH at the highest resolution (2560x1600) on the same settings!!
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-ch-wxp-pcie-aaa.html

Just wondering what is up with the CoH test, as the 8800 GTX has more pixel shading power and memory than the X1950 XTX...

Edit: Argh... Thanks for kindly replying to the above post :smile:

There has to be something wrong with that article.
How can two 8800 GTXs in SLI achieve a greater than 100% increase in frame rate compared with the same setup in single-GPU mode?

In many cases, 70~90% is the most realistic gain we can expect.
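A quick sanity check on that (the single-GPU figure is a made-up example):

# Two GPUs can at best double throughput, so any gain above 100%
# points at a benchmarking or driver anomaly. Example numbers invented.
single_fps = 42.0
for sli_fps in (73.0, 79.0, 90.0):
    gain = (sli_fps / single_fps - 1) * 100
    verdict = "plausible" if gain <= 100 else "superlinear - suspect!"
    print(f"SLI {sli_fps:.0f} fps vs single {single_fps:.0f} fps: "
          f"+{gain:.0f}% -> {verdict}")

The first two land in the typical 70-90% range; the last is the kind of result that suggests the test itself is off.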
 
Well, I happened to find a review at Digit-Life of Far Cry where both the X1900 XTX and X1950 XTX trump the 8800 GTS with 4xAA and 16xAF at 2560x1600 and also at 1600x1200...
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-fc2-wxp-pcie-aaa.html

And the strange thing is that the X1950 XTX trumps even the 8800 GTX in CoH at the highest resolution (2560x1600) on the same settings!!
http://www.digit-life.com/articles2/digest3d/1206/itogi-video-ch-wxp-pcie-aaa.html

Just wondering what is up with the CoH test, as the 8800 GTX has more pixel shading power and memory than the X1950 XTX...

Edit: Argh... Thanks for kindly replying to the above post :smile:

NP :)

It actually seems like bugs in the drivers.

Trini, well, Dave put it better than I did: it's the sum of the parts that will make the difference; raw bandwidth alone has little to do with it.
 
Actually, adding in 4xAA, what is the % drop? 1-3%? I don't consider that much.

Nope, try 15-30%, depending on the level of CPU limitation in the 0xAA results.

To be honest I don't think you've really formed a coherent argument so I'm not even sure what you're really saying (or maybe I'm just slow :oops: ). I guess your underlying point is that R600's bandwidth advantage isn't a big deal because G80 isn't bandwidth limited at higher resolutions with AA applied. I've yet to see you provide any evidence to support that hypothesis though. Also, you seem to be assuming that bandwidth will be R600's only advantage ;)
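On the CPU-limitation point, here's a contrived illustration of how a CPU cap hides the real AA hit (all numbers invented):

# How a CPU limit masks the GPU cost of enabling AA. Numbers invented.
cpu_cap  = 100.0             # fps the CPU can feed, regardless of AA
gpu_0xaa = 140.0             # fps the GPU could render without AA
gpu_4xaa =  95.0             # fps the GPU can render with 4xAA

measured_0x = min(cpu_cap, gpu_0xaa)   # 100 fps, CPU bound
measured_4x = min(cpu_cap, gpu_4xaa)   #  95 fps, GPU bound
print(f"measured drop: {(1 - measured_4x / measured_0x) * 100:.0f}%")
print(f"true GPU cost: {(1 - gpu_4xaa / gpu_0xaa) * 100:.0f}%")

A small measured drop at a CPU-bound setting says nothing about the real cost of AA, which is why the less CPU-limited benches show 15-30%.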
 
Nope, try 15-30%, depending on the level of CPU limitation in the 0xAA results.

To be honest I don't think you've really formed a coherent argument so I'm not even sure what you're really saying (or maybe I'm just slow :oops: ). I guess your underlying point is that R600's bandwidth advantage isn't a big deal because G80 isn't bandwidth limited at higher resolutions with AA applied. I've yet to see you provide any evidence to support that hypothesis though. Also, you seem to be assuming that bandwidth will be R600's only advantage ;)


At this point in time that is the only advantage I can see the R600 having with certainty. Can't speculate about things we don't really know; that's why I said the R600 has to have more of the other things too :LOL:

Hmm, I don't ever remember any tests comparing no AA against 4xAA only.

But if you look at that FiringSquad benchmark set, as the res increases and settings stay the same, fillrates are getting hit harder, especially when you look at the shift from 1920x1200 to 2560x1600.

Look at the Oblivion benchmarks, where the GTS has a lead at 1920x1200 and then drops under the X1950 XTX: bandwidth is equal, but the GTS has slightly lower pixel fillrates.

If most of these games are bandwidth bound to a higher degree, it doesn't really matter whether the GTS is a better architecture; it will still be bound by the bandwidth.

I remember Level505 stating something like "wow, as the res increases, the increase in megapixels is huge", and then attributing the bandwidth increase to alleviating that. Come on, that's so much BS it's not funny: the bandwidth requirement increases, but fillrate requirements are the ones going up like crazy.
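The megapixel jump itself is easy to put a number on:

# Pixel-count growth between common high-end resolutions.
prev = None
for w, h in [(1600, 1200), (1920, 1200), (2560, 1600)]:
    mp = w * h / 1e6
    growth = f" (+{(mp / prev - 1) * 100:.0f}% vs previous)" if prev else ""
    print(f"{w}x{h}: {mp:.2f} MP{growth}")
    prev = mp

To a first approximation, both fillrate and framebuffer bandwidth demands scale with that pixel count, so resolution scaling alone can't cleanly separate the two.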


http://www.xbitlabs.com/articles/video/display/gf8800_15.html

Found one with no AA, 2xAA and 4xAA.

It's not that hard of a hit ;)
 
But if you look at that FiringSquad benchmark set, as the res increases and settings stay the same, fillrates are getting hit harder, especially when you look at the shift from 1920x1200 to 2560x1600.

Bandwidth requirements go up with resolution as well....

Look at the Oblivion benchmarks, where the GTS has a lead at 1920x1200 and then drops under the X1950 XTX: bandwidth is equal, but the GTS has slightly lower pixel fillrates.

Somebody correct me if I'm wrong, but the GTS (single-cycle) has higher 4xAA fillrate than the XTX (two-cycle).
 
I don't think anyone is arguing that 1920x1200 is going to be faster than 2560x1600 tho!
 
Trini, yes, bandwidth requirements do go up too, but nowhere near the same percentage as fillrate requirements.

Right now I have a hard time believing the R600 is going to come out with a huge fillrate and shader advantage over the G80, unless those 1-2GHz figures for the vec ALUs are correct. I don't see an 814MHz, 64 vec4 ALU R600 having the shader power and fillrates (it might have the fillrates, depending on whether it's 32 ROPs or 16, but not the shader power advantage) to fully utilize the increase in bandwidth efficiently. We might see an R600 outperform a G80 by 20% at most? But the bandwidth is 50% higher than the G80's.
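Plugging the rumoured numbers in (every R600 figure below is the speculation above, not a confirmed spec; the G80 figures are the shipping 8800 GTX):

# Theoretical throughput: rumoured R600 specs vs. the shipping 8800 GTX.
# R600 numbers are pure speculation from this thread.
def gflops_madd(alus, width, mhz):
    return alus * width * 2 * mhz / 1000   # 2 flops per MADD

g80_shader  = gflops_madd(128, 1, 1350)    # 128 scalar ALUs @ 1.35GHz
r600_shader = gflops_madd(64, 4, 814)      # rumoured 64 vec4 ALUs @ 814MHz

g80_fill  = 24 * 575 / 1000                # 24 ROPs @ 575MHz -> Gpix/s
r600_fill = 16 * 814 / 1000                # rumoured 16 ROPs @ 814MHz

g80_bw  = 86.4                             # GB/s, 384-bit GDDR3
r600_bw = g80_bw * 1.5                     # the "50% more" figure above

print(f"shader: G80 {g80_shader:.0f} vs R600 {r600_shader:.0f} GFLOPS (MADD)")
print(f"fill  : G80 {g80_fill:.1f} vs R600 {r600_fill:.1f} Gpix/s")
print(f"bw    : G80 {g80_bw:.1f} vs R600 {r600_bw:.1f} GB/s")

Under those assumptions the shader and fillrate gaps are around 20% or less, while the bandwidth gap is 50%, which is exactly the imbalance being argued here.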
 
For every CPU-limited bench you find, I can find five that aren't ;) Check out Rage3D's review for some good examples.


The Rage3D one uses TAA; that's why I didn't look into it much.

But in any case, let's look at Rage3D. Take FEAR: the GTS is getting hit harder than the X1950 XTX as the res goes up. Is it bandwidth? If it were bandwidth, the hit would be the same.

Same with CoD2.

And with HL2: Episode One.
 