R480 = X850
R520 = X1800
R580 = X1900 (X1950)
Actually, wasn't the R580+ the 1950?
If you just reduce the width of the texel data paths and filter units, I don't think it would be that much cheaper. Splittable filter units might make sense. You'd also have to cut a bit from the texture address units.

For today's workload I agree. Personally I'm a fan of the idea of having some extra cheap units that can only sample, say, RGBA8 and below. That should be much cheaper than equipping the hardware with additional full-blown texture units. But then I'm not a hardware guy, so I don't know how feasible that would be.
It's useful if you want some unfiltered samples, but writing a shader with a mix of Sample() and Load() + bilinear/trilinear filter seems awkward if all you really want is filtered samples.

It should be noted, though, that theoretically the R600 could up to double the TEX rate in DX10 if you have a good mix of Load() and Sample() calls, since the Load() calls could be implemented with vertex fetch instructions.
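To make the awkwardness concrete, here's a minimal CPU-side sketch (NumPy, not HLSL) of what "Load() + bilinear filter" amounts to: four unfiltered fetches plus manual weighting per sample, which Sample() otherwise does for you in fixed-function hardware. The function names and single-channel layout are illustrative only, not any real shader API.

```python
import numpy as np

def bilinear(tex, u, v):
    """Bilinear filter built from unfiltered point fetches.

    tex: H x W single-channel array; (u, v) are continuous texel
    coordinates with texel centers at half-integer positions.
    """
    h, w = tex.shape
    x0 = int(np.floor(u - 0.5))
    y0 = int(np.floor(v - 0.5))
    fx = (u - 0.5) - x0  # fractional weights between texel centers
    fy = (v - 0.5) - y0

    def load(x, y):
        # Analogous to HLSL Load(): one unfiltered fetch (clamped here).
        return tex[np.clip(y, 0, h - 1), np.clip(x, 0, w - 1)]

    # Four fetches, then the weighting Sample() would do in hardware.
    return ((1 - fx) * (1 - fy) * load(x0,     y0)
          +      fx  * (1 - fy) * load(x0 + 1, y0)
          + (1 - fx) *      fy  * load(x0,     y0 + 1)
          +      fx  *      fy  * load(x0 + 1, y0 + 1))
```

Doing this per tap, and again per mip level for trilinear, is exactly the boilerplate the post is complaining about.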
Sure, but even with past experiences (which may differ a lot for different people) it remains a gamble.

The average customer would just use the past as a reference. Like "my X1900 lasted me two years while my buddy had to upgrade his 7800 after a year", or something along those lines.
We seem to be talking about two different things. You appear to be citing features, while I'm talking about alu:tex ratios.
If you just reduce the width of the texel data paths and filter units, I don't think it would be that much cheaper.
It's useful if you want some unfiltered samples, but writing a shader with a mix of Sample() and Load() + bilinear/trilinear filter seems awkward if all you really want is filtered samples.
The argument is bull, since the cheap crowd will also wait a year until the games land in the bargain bin, so they'll also be fine within their own timeline.
A lot of comments on the last few pages of the thread have talked about gamers on a budget much the same way as you would about someone who lives in a foreign country whose customs and culture you really don't understand in the slightest. Speaking as one of those people on a budget, comments like the quoted text are infuriating.
So you speak for the whole world? Nice.
Saying "comments like this are infuriating" is hardly "stating an opinion" in my book.
Sure it is, as it's a personal statement and not speaking for everyone.
If you call piling on TMUs and ROPs clever, technologically or architecturally, then I guess you have an argument.
Jawed
Actually, it's fairly easy to see what other effects our Vantage improvements had - generally they benefited all DX10 apps.

I don't really trust 3DMark as a target, because driver devs tend to tweak compiler heuristics towards those workloads. I'm not saying driver detection; I'm saying you compile, look at the output, and then go back and adjust your heuristics so that the output is better. This is not guaranteed to avoid regressions on other workloads. I do compiler work on a regular basis, and this is what we do: compile popular apps and tweak.
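A toy model of that tuning loop (all numbers hypothetical, nothing to do with any real compiler): pick the heuristic value that looks best on the benchmark you stare at, and a workload with a different sweet spot can regress.

```python
def runtime(threshold, loop_size):
    # Hypothetical cost model: unrolling (up to the threshold) speeds up
    # the hot loop, but a higher threshold adds a flat code-size cost.
    return loop_size / (1 + min(threshold, loop_size)) + 0.2 * threshold

thresholds = range(1, 17)
bench_a, bench_b = 16, 4  # hypothetical hot-loop sizes

# Tune the heuristic on benchmark A (the "popular app")...
tuned = min(thresholds, key=lambda t: runtime(t, bench_a))

# ...then compare benchmark B under that setting against its own optimum.
best_b = min(runtime(t, bench_b) for t in thresholds)
print(f"tuned threshold: {tuned}")
print(f"B with tuned setting: {runtime(tuned, bench_b):.2f}")
print(f"B's own optimum:      {best_b:.2f}")
```

B runs measurably slower under A's tuned setting than under its own best one, which is the regression risk being described.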
I don't think NV got to where they are today by being too stupid (NV3x aside), so there must be a reason behind the decisions made for GT200 that we're not seeing yet. NVidia loves high-margin chips, and they clearly know how the yield calculus works out.
I remember there was a time when people were salivating over Fast14 and ATI running way ahead of NV in ALU clocks.