X1800 XT makes the case...

overclocked said:
I would also add time to market - i.e. an instant launch - to that formula; not in general, but specifically for the G70/R520 comparison.

Definitely; IMO NVIDIA wouldn't have had such an easy stroll if R520 had been released at about the same time and in high enough quantities for the highest model.
 
overclocked said:
I would also add time to market - i.e. an instant launch - to that formula; not in general, but specifically for the G70/R520 comparison.
Nah, not really. I'm of the opinion that that will work itself out. If you can't buy it, you can't buy it. If ATI can't release the XT in volume, and if the GTX is considered to be better than the best ATI can release in volume, then more people will choose the GTX.

So I'd rather just focus on looking at the technology and performance of the product. Leave whether or not it's available to the people who are actually planning to upgrade (I'll probably have my 6600 GT until sometime late next year, as I just don't have much cash at the moment).
 
Alstrong said:
Should the FEAR performance be representative of Doom 3's performance?

If you notice, the X1800 sell sheets don't mention Doom 3 performance whatsoever. This leads me to believe that G70 will have better performance in this game than R520 - not entirely surprising, given that the current NV architecture is almost built around support for this rendering technique.

The only problem is, I wonder how many on-line reviews will do the old "Doom 3 and 3DMark comparison" thing with little or no reference to any of the other possible advantages/benefits of the R520 architecture?

My guess is: lots.
 
Ultra-threading is a major inflection point

The type of scheduler in R520 makes a whole new world of GPUs possible. In much the same way that NVIDIA's 6xxx series split the ROPs from the fragment shader pipelines, it brings an entirely new degree of flexibility to the architecture - only on a far more significant scale.

We know that there's little point in having "more ROPs" because they're constrained by memory bandwidth.

Similarly we can infer that there's little point in having more texture pipes because they're also constrained by memory bandwidth.
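The bandwidth argument in the two points above can be sanity-checked with some rough arithmetic. The clock speed, ROP count and memory figures below are assumed, period-plausible numbers, not specs from this thread:

```python
# Back-of-the-envelope check of why adding ROPs beyond a point buys nothing
# when memory bandwidth is the real limit. All figures are illustrative
# assumptions, not quoted specs.

core_clock_hz   = 600e6          # assumed core clock
rops            = 16             # assumed ROP count
bytes_per_pixel = 4 + 4          # 32-bit colour write + 32-bit Z write

# Peak bandwidth the ROPs *could* consume:
rop_demand = core_clock_hz * rops * bytes_per_pixel   # bytes/s

# Bandwidth a 256-bit bus of 1.5 GHz-effective GDDR3 actually supplies:
bus_bytes  = 256 // 8
mem_supply = 1.5e9 * bus_bytes                        # bytes/s

print(f"ROPs could consume {rop_demand / 1e9:.1f} GB/s")
print(f"Memory supplies    {mem_supply / 1e9:.1f} GB/s")
# The ROPs already out-demand the bus, so doubling them is wasted silicon.
```

Even with these conservative numbers the ROPs out-demand the memory bus, which is the point being made: the bottleneck is off-chip bandwidth, not ROP or texture-pipe count.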

But the constraints on shader operations per second (non-texturing) are rather more nebulous. In other words, the more the merrier - Moore's law will provide the on-die memory and arrays of ALUs that deliver 2x speed-ups every generation.

With this new type of scheduler we're not only seeing the texture pipes fully decoupled from the shader pipes, but we're also seeing the relative capacities of the two being decoupled.

The first generation of this new scheduler, in RV530, already shows that a 3:1 ratio between shader pipes and texture pipes is viable, with 12 fragments being shaded while 4 unrelated fragments are being textured.
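A toy simulation can illustrate the latency-hiding idea behind this kind of scheduler: with only one batch in flight the ALUs stall at every texture fetch, while several batches in flight keep them busy. The cycle counts and latencies below are invented for illustration; only the scheduling idea comes from the architecture discussed above:

```python
# Toy model of an "ultra-threading" style scheduler: a batch is suspended
# at each texture fetch and another batch runs on the ALUs meanwhile.
# All numbers (ALU cycles, fetch latency, round counts) are made up.

def run(num_batches, alu_cycles=3, tex_latency=12, fetches=4):
    """Simulate interleaved batches; return (total cycles, ALU idle cycles)."""
    ready  = [0] * num_batches   # cycle at which each batch may run again
    rounds = [0] * num_batches   # fetch rounds completed per batch
    t = idle = 0
    while any(r < fetches for r in rounds):
        runnable = [i for i in range(num_batches)
                    if rounds[i] < fetches and ready[i] <= t]
        if runnable:
            # pick the least-advanced ready batch (simple fairness)
            i = min(runnable, key=lambda j: (rounds[j], ready[j]))
            t += alu_cycles              # batch i occupies the ALUs
            rounds[i] += 1
            ready[i] = t + tex_latency   # its texture result arrives later
        else:
            idle += 1                    # nothing ready: ALUs stall
            t += 1
    return t, idle

for n in (1, 2, 6):
    total, idle = run(n)
    print(f"{n} batch(es): {total} cycles, {idle} idle")
```

With a single batch the ALUs sit idle for the full texture latency after every fetch; with a handful of batches the scheduler nearly always has useful shading work on hand, which is exactly what decoupling the shader pipes from the texture pipes buys.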

A similar scheduler is operating in Xenos and goes one step further in providing that GPU with the means to effectively load-balance vertex and fragment shader work.

Finally, of course, this new type of scheduler allows much smaller batches of fragments to be processed. This will make dynamic branching a viable performance enhancement in fragment shader programs - something that we've not seen so far in SM3 architectures.
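To see why batch size matters for branching, consider a hypothetical SIMD model in which the whole batch must execute the expensive side of a branch if any of its fragments takes it. The batch sizes below are roughly the commonly cited figures (R520-class ~16 fragments versus thousands for earlier SM3 parts); the taken-rate and cycle costs are invented:

```python
# Why small batches make dynamic branching pay off: one diverging fragment
# drags its whole SIMD batch down the expensive path. Costs and taken-rate
# are illustrative assumptions.

import random

def avg_cost_per_fragment(n_fragments, batch_size, p_taken,
                          cheap=10, expensive=100, seed=7):
    """Average cycles/fragment when a whole batch pays for any taken branch."""
    rng = random.Random(seed)
    total_cycles = 0
    for _ in range(n_fragments // batch_size):
        # the batch diverges if any of its fragments takes the branch
        diverged = any(rng.random() < p_taken for _ in range(batch_size))
        total_cycles += batch_size * (expensive if diverged else cheap)
    return total_cycles / n_fragments

# 2% of fragments take the expensive path (e.g. fail an early-out test)
for bs in (16, 1024, 4096):
    cost = avg_cost_per_fragment(65536, bs, 0.02)
    print(f"batch {bs:5d}: {cost:.1f} cycles/fragment")
```

With huge batches, essentially every batch contains at least one diverging fragment, so the branch saves nothing; with 16-fragment batches most batches stay coherent and actually skip the expensive path, which is what makes dynamic branching a performance win rather than a checkbox feature.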

So, in summary, the new scheduler is a major inflection point - I'd say as important as the first shader-capable GPU (but hey, I wasn't around then, so what do I know?).

Jawed
 
Ailuros said:
Definitely; IMO NVIDIA wouldn't have had such an easy stroll if R520 had been released at about the same time and in high enough quantities for the highest model.

Sure, I agree with that. But at the same time you could get SLI too, and then there's brand loyalty and many other factors. On the other hand, we really don't know how fast it is yet; feature-wise it's great.

Chalnoth said:
Nah, not really. I'm of the opinion that that will work itself out. If you can't buy it, you can't buy it. If ATI can't release the XT in volume, and if the GTX is considered to be better than the best ATI can release in volume, then more people will choose the GTX.

So I'd rather just focus on looking at the technology and performance of the product. Leave whether or not it's available to the people who are actually planning to upgrade (I'll probably have my 6600 GT until sometime late next year, as I just don't have much cash at the moment).

Well, I agree with this too, as I'm not going to buy a new computer until Vista myself.
I took what you wrote in your list more as, as I said, "general" in this case, and it's always easy looking back to see what the "right" choice would have been.

I think the pricing of especially the 512 MB, and also the 256 MB, version of the X1800 XT IS very strange, because given the performance advantage ATI is suggesting, plus the cost of expensive fast memory, I take this as an indication of low-volume parts - but I could be wrong. That bit really struck me when I first read the ATI papers.
 
overclocked said:
Sure, I agree with that. But at the same time you could get SLI too, and then there's brand loyalty and many other factors. On the other hand, we really don't know how fast it is yet; feature-wise it's great.

Multi-GPU configs are an entire story of their own; even a high-end gamer will think twice about the extra cost of such a system. While there are definitely customers for such systems, their market share is also dramatically smaller than the remaining high-end segment of the market.

As for the feature-set, so far I count the ability to combine 64bpp HDR + MSAA and the new, less angle-dependent AF mode. Of course there is also adaptive AA, but since it's present on competing products I won't count it as an advantage. All the remaining aspects - possible advantages and disadvantages - when it comes to features/functionality are mainly of developer interest.
 
Ailuros said:
Multi-GPU configs are an entire story of their own; even a high-end gamer will think twice about the extra cost of such a system. While there are definitely customers for such systems, their market share is also dramatically smaller than the remaining high-end segment of the market.

As for the feature-set, so far I count the ability to combine 64bpp HDR + MSAA and the new, less angle-dependent AF mode. Of course there is also adaptive AA, but since it's present on competing products I won't count it as an advantage. All the remaining aspects - possible advantages and disadvantages - when it comes to features/functionality are mainly of developer interest.

Well, I think you're right, but only from the perspective that we know the performance and/or the hit with AA + FP16 HDR. For now we don't know, but soon we will.
There are also a couple more things you could add, such as noise, power and heat, and some may find a dual-slot heatsink impractical; the list goes on.
 
We don't know noise, heat or power yet. Dual-slot is a given, though I think most of the people who will be buying these cards really don't care about dual-slot unless it's a Shuttle or similar system. Also, I'd personally much rather have a dual-slot cooler if it moves the hot air out of my case; something I really dislike about the 7800 GTX cooler is that it just recirculates the hot air around inside the case. That's also why I like the NV/ATI Silencers over, say, a Zalman VGA cooler.
 
Skrying said:
We don't know noise, heat or power yet. Dual-slot is a given, though I think most of the people who will be buying these cards really don't care about dual-slot unless it's a Shuttle or similar system. Also, I'd personally much rather have a dual-slot cooler if it moves the hot air out of my case; something I really dislike about the 7800 GTX cooler is that it just recirculates the hot air around inside the case. That's also why I like the NV/ATI Silencers over, say, a Zalman VGA cooler.

Yep, don't care about dual-slot either since I'll only have a single card anyway. And I agree that exhausting out the case is best - I miss my Silencer.
 
Chalnoth said:
I'm just taking this from the perspective of a scientist. You really have to have independent verification to be sure of the results of any experiment.

David Kirk? ;)
 