Definitive answer sought: 128MB immediately beneficial or no

Above

Newcomer
The usual argument is that extra memory on a video card is a small expense to insure the card's longevity, even if no current improvement is measurable. However, that extra money could be put towards buying another video card months sooner. Which would be more satisfying? You take your pick on that one.
Extra memory can enable higher resolutions and deeper FSAA, but who knows how often those settings will be playable anyway.

What I would really like to know is whether the extra memory would help avoid stutters in many games. When a game sticks for a fraction of a second, one tends to suspect some heavy transfer going on. I happen to think that some games running at relatively low framerates would be made considerably more tolerable if those stutters were removed. Can extra memory do that? I would expect 128MB to be enough for anything around now, so long as textures, once loaded into that memory, are recognized as being there and not pointlessly reloaded.
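
To make the suspicion concrete, here is a toy model of what I think happens (my own sketch with made-up numbers; real drivers are surely cleverer): when the working set of textures exceeds local memory, something has to be evicted and later re-uploaded over the AGP bus, and each re-upload is a candidate for a mid-frame hitch.

```python
# Toy model of texture residency (hypothetical numbers, not any real driver).
# Textures are evicted least-recently-used when local memory overflows;
# every re-upload over the AGP bus is counted as a potential hitch.
from collections import OrderedDict

def count_reuploads(frame_texture_sets, vram_mb):
    resident = OrderedDict()   # texture id -> size in MB, kept in LRU order
    used = 0.0
    reuploads = 0
    for frame in frame_texture_sets:
        for tex_id, size in frame:
            if tex_id in resident:
                resident.move_to_end(tex_id)   # already on the card: just touch it
                continue
            reuploads += 1                     # has to come over the AGP bus
            resident[tex_id] = size
            used += size
            while used > vram_mb:              # evict least-recently-used to fit
                _, evicted_size = resident.popitem(last=False)
                used -= evicted_size
    return reuploads

# 100 frames sliding a 30-texture window through a 90 MB set of 1 MB textures:
frames = [[(i % 90, 1.0) for i in range(f, f + 30)] for f in range(100)]
print(count_reuploads(frames, vram_mb=64))    # keeps re-uploading: hitches
print(count_reuploads(frames, vram_mb=128))   # 90: everything loads exactly once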

Does anyone have any hard evidence, like graphs, on the subject? Thanks. I have seen people question the worth of 128MB a few times, but I have never seen an answer that looked very informed. I thought that if anyone could answer it, it would be Beyond3D.
 
I don't have graphs, but I can say a couple of things:

With the GF4's rather low performance hit from AA, 1600x1200 + 4x AA is only possible with 128MB, and the Ti4600 is actually capable of running a good many games at that level.

Both the Ti4200 and the Radeon 8500, available in both 64MB and 128MB versions, have a number of results showing that 128MB is slightly faster even at lower clock speeds, and numerous gamers have noticed that in some games, like RtCW, the 128MB variants don't stutter in places where the 64MB cards do.
 
Well, if you're looking at future games, the Codecreatures benchmark scores twice as high on 128MB 8500s and Ti4200s as on their 64MB counterparts.
 
If you look at any current game, going to a 128MB card only helps with being able to run higher resolutions with AA.

But if we look at upcoming games, such as UT2003, Doom 3, etc., 128MB will probably benefit them compared to a 64MB card, and maybe 256MB will help more than 128MB. Evidence of this is in the Codecreatures benchmark demo, as noted above...
 
I wish people would not point to those unreleased benchmarks. They are not available to public scrutiny, and the results are thus liable to manipulation. We already have real games and demos; let us use them instead. This thread was supposed to be about the current situation.
 
Geeforcer said:
Not exactly apples to apples, considering the clock difference. It would make more sense to compare the Radeon 8500 64MB and 128MB versions.

I know, that's why I put in the smiley :)

But the message is there: the higher memory clock seems to be much more important than the memory size. Which likely means that those benchmarks don't benefit from 128MB.

IMHO, this is one of the questions people are facing: "Should I buy the 64MB Ti4200 or the 128MB one?" So far, it looks like the 64MB is the better buy (both cheaper and faster).

What we'd need is some benchmarks with AA enabled...
 
Above said:
I wish people would not point to those unreleased benchmarks. They are not available to public scrutiny, and the results are thus liable to manipulation. We already have real games and demos; let us use them instead. This thread was supposed to be about the current situation.

In this case the answer is a definite no.
 
128MB or 64?

For the full-IQ aficionados who disable texture compression... then you'll need the extra memory, I would say. But hardly anyone does that, since it often requires editing an .ini file of the game engine.
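
For a sense of scale, some rough arithmetic (illustrative numbers only; I'm assuming DXT1's fixed 8 bytes per 4x4 block against 32-bit texels, and a 4/3 factor for a full mipmap chain):

```python
# Rough texture footprint, uncompressed 32-bit vs. DXT1 (8:1 over RGBA8).
# The 4/3 factor approximates a full mipmap chain; numbers are illustrative.
def texture_mb(width, height, bytes_per_texel, mipmaps=True):
    size = width * height * bytes_per_texel
    if mipmaps:
        size *= 4 / 3
    return size / (1024 * 1024)

for w, h in [(512, 512), (1024, 1024), (2048, 2048)]:
    raw = texture_mb(w, h, 4)      # RGBA8: 4 bytes per texel
    dxt1 = texture_mb(w, h, 0.5)   # DXT1: 8 bytes per 4x4 block = 0.5 B/texel
    print(f"{w}x{h}: {raw:5.2f} MB uncompressed vs {dxt1:4.2f} MB DXT1")
```

A handful of uncompressed 2048x2048 textures eats the extra 64MB very quickly, which is why the setting matters here at all.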
 
The difference that texture compression makes to local memory usage is less of a concern than the effect the setting has on the memory bandwidth hit. It's a little difficult to differentiate between the two effects when trying to see if there is any immediate advantage of 128MB over 64MB. For the most part, there is no difference, other than that the former will permit the best cards to run at something like 1600x1200 @ 32-bit with 4x SS-FSAA on: the buffer for the AA would be 29MB, the back and front buffers would be 7.3MB each, and the z-buffer would probably be around 5.3MB... a total of 48.9MB. A 64MB card would have 15MB left for everything else and would almost certainly end up texture swapping over the AGP bus in a newer game.
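
Plugging those figures into a quick script (taking 1MB = 2^20 bytes; the 24-bit depth format is my guess to land near the z-buffer figure above, so the total comes out close to, not exactly at, 48.9MB):

```python
# Back-of-the-envelope framebuffer budget for 1600x1200 @ 32-bit with
# 4x supersampling, using 1 MB = 2**20 bytes. The depth format (24-bit)
# is an assumption chosen to roughly match the ~5.3 MB z-buffer figure.
MB = 2 ** 20
W, H = 1600, 1200
pixels = W * H

aa_buffer = pixels * 4 * 4 / MB   # 4 samples/pixel, 4 bytes/sample -> ~29.3 MB
front     = pixels * 4 / MB       # 32-bit front buffer -> ~7.3 MB
back      = pixels * 4 / MB       # 32-bit back buffer  -> ~7.3 MB
zbuffer   = pixels * 3 / MB       # 24-bit depth        -> ~5.5 MB

total = aa_buffer + front + back + zbuffer
print(f"buffers: {total:.1f} MB, leaving {64 - total:.1f} MB on a 64 MB card")
```

That leaves roughly 15MB of a 64MB card for textures and geometry, which is the point: at those settings the swapping is all but guaranteed.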
 
What I thought would prove the most pertinent graph, of framerate over time in a Giants demo at 1600x1200, shows little apart from a 4 fps increase in many places:
http://www.digit-life.com/articles/digest3d/0302/itogi-video-giants-min.html
Do demos run smoother on the second loop, or am I just imagining it? That is, all the data should already be loaded by then.
Most informative so far was the news about the RtCW stutters.
I suppose it is hard to gather experiences about this issue, because it does not necessarily lend itself to statistics. But of all the websites out there, surely someone has both a 64MB and a 128MB card to compare. I guess they prefer the easy job of running benchmarks for a few minutes and pronouncing themselves gurus.
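
For what it's worth, this is the kind of number I would want from a site with both cards, rather than an average framerate (a toy calculation over a hypothetical list of per-frame times in milliseconds):

```python
# Average fps hides hitches; the worst frames do not. Input is a plain
# list of per-frame times in milliseconds (a hypothetical log format).
def stutter_report(frame_times_ms):
    times = sorted(frame_times_ms)
    n = len(times)
    avg_fps = 1000.0 * n / sum(times)
    p99 = times[int(0.99 * (n - 1))]    # 99th-percentile frame time
    mean = sum(times) / n
    hitches = sum(1 for t in frame_times_ms if t > 3 * mean)
    return avg_fps, p99, hitches

# A steady 25 fps run vs. the same run with ten 200 ms sticks:
steady = [40.0] * 500
sticky = [40.0] * 490 + [200.0] * 10
for name, run in (("steady", steady), ("sticky", sticky)):
    fps, p99, hitches = stutter_report(run)
    print(f"{name}: {fps:.1f} avg fps, p99 {p99:.0f} ms, {hitches} hitches")
```

The averages come out at 25.0 and 23.1 fps, near enough to call a tie, while the 99th-percentile frame time jumps from 40 ms to 200 ms. A few minutes of benchmarking reported as an average would never show it.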
 