Usefulness of 512MB VRAM with X1900

Just FYI, no game should ever stutter due to too little texture memory. Stuttering isn't caused simply by swapping textures in and out; it's caused by inefficient texture caching algorithms. I've seen stutters come and go over many driver revisions over the years. It shouldn't ever happen. But it does.

The only thing you should ever have to worry about with insufficient memory is slightly lower framerates. But, unfortunately, we get stuttering that comes and goes.
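(To illustrate the distinction: here is a toy frame-time model, with entirely made-up numbers, showing how the same total upload cost can yield either a uniformly lower framerate or a visible hitch, depending on how the texture manager schedules uploads.)

```python
# Toy model: why poor texture management shows up as stutter rather
# than just a lower average framerate. All numbers are hypothetical.

BASE_FRAME_MS  = 16.0   # render time when all textures are resident
UPLOAD_MS      = 4.0    # assumed cost of uploading one texture
UPLOADS_NEEDED = 12     # uploads required over a 12-frame window

# Good management: amortize the uploads, one per frame.
smooth = [BASE_FRAME_MS + UPLOAD_MS for _ in range(UPLOADS_NEEDED)]

# Poor management: stall and do every upload in a single frame.
spiky = [BASE_FRAME_MS] * UPLOADS_NEEDED
spiky[5] += UPLOAD_MS * UPLOADS_NEEDED

print(f"smooth: avg {sum(smooth)/len(smooth):.0f} ms, worst {max(smooth):.0f} ms")
print(f"spiky:  avg {sum(spiky)/len(spiky):.0f} ms, worst {max(spiky):.0f} ms")
# smooth: avg 20 ms, worst 20 ms  -> slightly lower framerate
# spiky:  avg 20 ms, worst 64 ms  -> same average, visible stutter
```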
 
neliz said:
Texture swapping is what will make any card stutter in D3 and Q4 at "Ultra Quality" (I think the box or manual clearly states that 512MB of graphics memory is required).
But who needs to run any D3-engine game at Ultra Quality, anyway? Turning off compression is just silly given the difference is virtually unnoticeable. Regardless, a 512MB card actually makes virtually no difference in D3, even in "Ultra Quality" mode - see http://www.beyond3d.com/reviews/sapphire/512/index.php?p=14

[Doom 3 "Ultra Quality" benchmark chart from the linked review]


I very much doubt a 512MB card is going to make much difference in current games, especially on a PCIe x16 card with sufficient system RAM. If you've got money to burn then go for it; otherwise, if you are a normal, sane person, there is little point at the moment.
 
Diplo said:
But who needs to run any D3-engine game at Ultra Quality, anyway? Turning off compression is just silly given the difference is virtually unnoticeable. Regardless, a 512MB card actually makes virtually no difference in D3, even in "Ultra Quality" mode - see http://www.beyond3d.com/reviews/sapphire/512/index.php?p=14

I very much doubt a 512MB card is going to make much difference in current games, especially on a PCIe x16 card with sufficient system RAM. If you've got money to burn then go for it; otherwise, if you are a normal, sane person, there is little point at the moment.

From the same review, HL2 at 1600x1200 with 6xAA/16xAF:
[benchmark chart]


And this isn't the only example showing this, though all of them are from this particular review.
 
Chalnoth said:
I don't think that had anything to do with the amount of memory; rather, it was an issue with Hyper-Z.
So how come the X800 XL with 512 MiB has that issue fixed while the one with 256 MiB hasn't?
 
Well, if my supposition is correct, it would mean one of two things:
1. The hierarchical-Z data is stored in video memory, and since the 512 has more, ATI felt it could be made bigger.
2. In a later revision, ATI fixed a performance bug.

Anyway, the reason I don't think it's memory space (unless there's some really bad texture management going on) is simply the memory requirements of those resolutions. 1280x1024x32 w/ 6x AA requires about 68MB of RAM. 1600x1200x32 w/ 6x AA requires about 95MB of RAM. On a 256MB board, that's not a whole lot of difference.

Now, in a worst-case scenario (which is probably unlikely), one would need to transfer an additional 27MB of texture data across the AGP bus each frame. Since the X800XL 256MB was only running at 20fps, this would require 540MB/sec of texture data (or, if you go with the 512's 33fps, you're at almost 900MB/sec). AGP 8x should be capable of 2.1GB/sec of bandwidth. So with a good memory management algorithm, you shouldn't see such tremendous dips in framerate. Oh, and don't forget that at these low framerates, all other traffic across the AGP bus is going to be suppressed very significantly.
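(A quick sketch to check the arithmetic above, using the framerates and the 27MB figure just quoted:)

```python
# Back-of-envelope check of the AGP-traffic argument, using the
# figures from the post above.

extra_texture_mb = 27    # extra texture data per frame (MB)
agp_8x_gbps      = 2.1   # theoretical AGP 8x bandwidth (GB/s)

for fps in (20, 33):     # X800XL 256MB and 512MB framerates
    mb_per_s = extra_texture_mb * fps
    share = mb_per_s / 1024 / agp_8x_gbps
    print(f"{fps} fps -> {mb_per_s} MB/s ({share:.0%} of AGP 8x)")

# 20 fps -> 540 MB/s (25% of AGP 8x)
# 33 fps -> 891 MB/s (41% of AGP 8x)
# Both are well under the 2.1 GB/s limit, hence the claim that good
# memory management shouldn't produce such a large framerate gap.
```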
 
ANova said:
Well, FEAR stutters horribly on my 256MB X850 XT with texture settings on high, so there already seems to be a game or two that can make use of 512MB.

What kind of memory does your system have? I have a 256MB 7800GTX SLI setup and I don't stutter. Back when I only had 1 gigabyte of system memory I did, but after upgrading to 2 gigs it went away.
 
Chal, #1 sounds like memory amount, and #2 doesn't seem to correspond to other tests. We don't see the same split at those high-res, high-AA settings in any other bench but the apparently texture-heavy HL2 (4xAA & 6xAA) and the D3 "Turkey Baster" demo (here, too). It seems likely that "amount of memory" is the issue. Pity B3D hasn't run HierZ tests on the 256 and 512MB R430s to make it conclusive, but that test does show R520 didn't gain anything on R420. You'd think that if HierZ improvements were made to R430, coming as it did after R420, they would've carried over to R520 and be evident at the 16x12 resolution all of these benches top out at.

It'd be odd if HyperZ were the issue at 16x12, because I thought the X800 series had improved it to fit up to 19x10 ("HD") and to lose performance gracefully at resolutions above that. Besides, both the 256MB and 512MB versions of the X800XL used the same R430 GPU, no?

And the GTX-512's extra RAM sure seems to be responsible for its super-linear framerate gains in BF2 compared to the GTX-256 (although 19x12 4xAA is outside the OP's range). There's no chip revision there that could explain a framerate differential greater than the core or memory clock differentials.

Ail's "Memory Consumption in Recent Games" seems appropriate. :smile: Also, HW.fr's contribution.
 
Kaotik said:
Actually, I believe we already have such games; I remember seeing several benches where the X800XL 512MB was beating the X850XT 256MB and was about on par with the 7800GTX 256MB.

In pre-release demos? Did the same differences hold after the final game was tested?
 
MistaPi said:
It's not that obvious to me, I'm afraid. :) To be clear, and sorry if I'm nagging: don't you think those games will demand too much at full detail levels, say at 1280x1024 with at least some AA and AF, for 512MB of VRAM to matter much anyway? That is, you'd have to drop to a lower setting that doesn't demand that much memory.

If you intend to keep a high-end GPU for more than a year from today, I find it hard to believe that future games will not benefit from 512MB of RAM. As already mentioned, games based on the UE3 engine would be one example.
 
Pete said:
Chal, #1 sounds like memory amount, and #2 doesn't seem to correspond to other tests. We don't see the same split at those high-res, high-AA settings in any other bench but the apparently texture-heavy HL2 (4xAA & 6xAA) and the D3 "Turkey Baster" demo (here, too). It seems likely that "amount of memory" is the issue. Pity B3D hasn't run HierZ tests on the 256 and 512MB R430s to make it conclusive, but that test does show R520 didn't gain anything on R420. You'd think that if HierZ improvements were made to R430, coming as it did after R420, they would've carried over to R520 and be evident at the 16x12 resolution all of these benches top out at.

It'd be odd if HyperZ were the issue at 16x12, because I thought the X800 series had improved it to fit up to 19x10 ("HD") and to lose performance gracefully at resolutions above that. Besides, both the 256MB and 512MB versions of the X800XL used the same R430 GPU, no?

And the GTX-512's extra RAM sure seems to be responsible for its super-linear framerate gains in BF2 compared to the GTX-256 (although 19x12 4xAA is outside the OP's range). There's no chip revision there that could explain a framerate differential greater than the core or memory clock differentials.

Ail's "Memory Consumption in Recent Games" seems appropriate. :smile: Also, HW.fr's contribution.


Ahhhh someone actually read it ;)

Just a small note regarding BF2 and the GTX 512 vs. GTX 256: the amount of memory is part of the reason, along with considerably higher bandwidth and somewhat higher processing power.
 
ChrisRay said:
What kind of memory does your system have? I have a 256MB 7800GTX SLI setup and I don't stutter. Back when I only had 1 gigabyte of system memory I did, but after upgrading to 2 gigs it went away.
Yeah, I have 1GB. Actually, I have 1.5GB, but I removed the 512MB because it resulted in system instability when overclocked, and the sticks also got quite warm.
 
Ailuros said:
In pre-release demos? Did the same differences hold after the final game was tested?

I believe the tests were on released games, but I'm not 100% sure, so I'll try to dig them up somewhere to check.
 
Kaotik said:
I believe the tests were on released games, but I'm not 100% sure, so I'll try to dig them up somewhere to check.

I'm just asking because I recall tests from the CoD2 pre-release demo.
 
Chalnoth said:
Well, if my supposition is correct, it would mean one of two things:
1. The hierarchical-Z data is stored in video memory, and since the 512 has more, ATI felt it could be made bigger.
2. In a later revision, ATI fixed a performance bug.
Hierarchical-Z data is kept on-chip. Besides, you haven't given any reason why this should be related to Hyper-Z in the first place.

Anyway, the reason I don't think it's memory space (unless there's some really bad texture management going on) is simply the memory requirements of those resolutions. 1280x1024x32 w/ 6x AA requires about 68MB of RAM. 1600x1200x32 w/ 6x AA requires about 95MB of RAM. On a 256MB board, that's not a whole lot of difference.
1280x1024 takes 70 MiB, 1600x1200 about 102.5 MiB. And you might not want to downsample directly to the front buffer.
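(For reference, a sketch of the accounting behind those figures; the 4 bytes per sample and the two extra single-sample buffers are my inference from the numbers, not something stated in the posts:)

```python
# Reconstruction of the framebuffer arithmetic: a 6x multisampled
# color buffer and Z/stencil buffer, plus a separately resolved back
# buffer and a front buffer. Assumptions inferred, not authoritative.

def framebuffer_mib(width, height, aa_samples,
                    bytes_per_sample=4, single_sample_buffers=2):
    per_pixel = (2 * aa_samples + single_sample_buffers) * bytes_per_sample
    return width * height * per_pixel / 2**20

print(f"{framebuffer_mib(1280, 1024, 6):.1f} MiB")  # 70.0 MiB
print(f"{framebuffer_mib(1600, 1200, 6):.1f} MiB")  # 102.5 MiB
# Resolving straight into the front buffer would drop one 4-byte
# buffer per pixel, which lands near Chalnoth's lower estimates.
```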

Now, in a worst-case scenario (which is probably unlikely), one would need to transfer an additional 27MB of texture data across the AGP bus each frame. Since the X800XL 256MB was only running at 20fps, this would require 540MB/sec of texture data (or, if you go with the 512's 33fps, you're at almost 900MB/sec). AGP 8x should be capable of 2.1GB/sec of bandwidth. So with a good memory management algorithm, you shouldn't see such tremendous dips in framerate. Oh, and don't forget that at these low framerates, all other traffic across the AGP bus is going to be suppressed very significantly.
27MiB (or 32.5MiB, whatever) is not the worst case, because the order in which you need the textures matters. If you have three textures, A, B and C, which you need in that order, but only two fit into texture memory, you need to overwrite two textures per frame (except for the first frame).
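(A toy simulation of that ordering problem; the texture names and the LRU policy are illustrative only. A simple least-recently-used policy actually re-uploads all three textures every frame here, and even a clairvoyant replacement policy can't get below the two overwrites per frame mentioned above:)

```python
# Minimal sketch of texture-memory thrashing when the per-frame
# working set exceeds capacity. Not any driver's actual algorithm.

def simulate_lru(access_pattern, capacity, frames):
    """Count texture uploads per frame under LRU eviction."""
    resident = []  # front of list = least recently used
    uploads_per_frame = []
    for _ in range(frames):
        uploads = 0
        for tex in access_pattern:
            if tex in resident:
                resident.remove(tex)   # hit: just refresh recency
            else:
                uploads += 1           # miss: upload over the bus
                if len(resident) == capacity:
                    resident.pop(0)    # evict least recently used
            resident.append(tex)
        uploads_per_frame.append(uploads)
    return uploads_per_frame

# Textures A, B, C needed in order each frame; only two fit.
print(simulate_lru(["A", "B", "C"], capacity=2, frames=4))
# -> [3, 3, 3, 3]: LRU always evicts exactly the texture needed next,
#    so 27MiB of "extra" textures can cost far more than 27MiB of
#    per-frame transfer.
```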

Texturing directly from main memory has other severe drawbacks.

Btw, AGP texture download bandwidth for some ATI cards:
http://www.beyond3d.com/reviews/ati/r423/index.php?p=16
 
Well, I suppose it could be due to the AGP texturing, then, but to me that just says loading textures from system memory is far, far less efficient than it should be. Hopefully nVidia's work with TurboCache will help this somewhat.
 
Doesn't 3DMark06 benefit from having 512MB of VRAM?

Other than that, yeah, I guess, as Ailuros said, that by the time 512MB is "required", better GPUs will be there to take advantage of the larger amount of VRAM.
 
ANova said:
Yeah, I have 1GB. Actually, I have 1.5GB, but I removed the 512MB because it resulted in system instability when overclocked, and the sticks also got quite warm.


Sorry for the lateness, but I am assuming the stuttering you have been experiencing has been between room transitions, turns, and scripted sequences? I noticed that regardless of my graphical settings (I tried running the game at 1152x864 with 2xAA/8xAF) this stuttering still occurred, so I just gave up on the game until I got more system memory. The framebuffer consumption at the settings I mentioned isn't that significant; I doubt it could have accounted for what I was experiencing. Of course, putting in 2 gigs of memory allowed me to play at 1600x1200 with 4xAA/8xAF without the stuttering and hitching I mentioned above.

Chris :)
 
Well, I just set the textures to medium and it played fine with everything else on max detail except soft shadows. I didn't try lowering the resolution to see if the stuttering was still present, but I do remember now that it affected my framerate quite badly.
 