Nvidia ATI Scrimmage first results

sethk

Newcomer
Totally not trolling here, but here is my (IMO) impartial take on what we've seen so far, now that the X800Pro / XT results are out.

Looks like the 520MHz core / 1120MHz memory X800XT is faster than the 6800U in many cases, especially when 8x AF is turned on. It doesn't blow it out of the water; in fact, given the closeness of the results, I think these factors might be more important when considering an early purchase (say, by early July) of either of these products:

- Unless AIB vendors come up with real one-slot, one-connector 6800U designs, the X800XT's lower power and cooling requirements will attract those for whom this is an issue.

- People who think Shader Model 3.0 is a big deal might choose an Nvidia solution because of that.

- Nvidia and ATI might wrangle over model / spec positioning, with a value leader taking a good chunk of the sales. If a 16-pipeline product were available for $300 by July, I'd buy that, whichever company was selling the card. Nvidia is slightly underperforming the X800XT at 400MHz vs 520MHz, but at, say, 475MHz core and up, they'd probably be ahead by a hair (rough arithmetic sketched after this list). This might be another avenue for these two companies to compete.
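
A back-of-the-envelope sketch of that clock argument in C. The ~10% X800XT lead and the assumption that scores scale linearly with core clock are mine, not benchmark data, so treat the numbers as illustrative only:

#include <stdio.h>

int main(void) {
    const double base_clk = 400.0;  /* 6800U core clock, MHz */
    const double gap = 1.10;        /* assumed X800XT lead at stock: ~10% */

    /* core clock at which a linearly scaling 6800U would draw level */
    printf("break-even clock: %.0f MHz\n", base_clk * gap);

    /* projected score ratio for a hypothetical 475MHz part vs the X800XT */
    printf("475MHz part vs X800XT: %.2fx\n", (475.0 / base_clk) / gap);
    return 0;
}

By these made-up numbers the break-even clock is 440MHz, and 475MHz would land about 8% ahead, which is why the "475 and up" guess doesn't seem crazy to me.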

Because of their last-generation product, Nvidia is under intense scrutiny for IQ, so I hope they are able to keep IQ good while bumping up performance, whether through spec or driver upgrades.

In either case, this looks like a competitive generation, and I hope that means prices will fall fast this time around. That, in addition to the overall good performance of both these architectures, is good news for us consumers.
 
I love the way people always make it sound so easy to bump up the clocks. If the chips were reliable at that clock, do you think nVidia WOULDN'T have shipped it at that clock?

Don't get your hopes up.
 
http://www.theinquirer.net/?article=15732

System Builder Summit 2004

The general feeling about the ATI X800XT chip is that ATI has struck a blow against Nvidia with this one.

Mysteriously, neither Nvidia nor any of its partners are here. Nvidia told us in Geneva that it wouldn't be at this event since it had its Nvision in Sardinia last week, and none of its partners decided to show up at this rather prestigious Gartner gathering.

ATI's partners did show up: Sapphire, FIC, TUL (ex PowerColor) and Gigacube were spotted at the scene yesterday.
 
Worth pointing out...

Alienware reality:

1. Price gap between X800Pro and 6800Ultra cards = +$235
2. If you choose the 6800Ultra, you're prompted to upgrade to a 650W PSU = +$222
3. X800Pro is the 'recommended' card.
4. 6800Ultra still has 'limited availability' even though it launched Apr 14.
5. No 6800Pro or 6800GT... probably not for a while.

Nvidia could have great technology, but if they don't get yield, availability, pricing, and power right... there could be a big problem. On this last point, we know the 6800GT is supposed to have lower power requirements and a single connector, but when are we going to see it???
 
Re: Worth pointing out...

Qaz said:
Alienware reality:

1. Price gap between X800Pro and 6800Ultra cards = +$235
2. If you choose the 6800Ultra, you're prompted to upgrade to a 650W PSU = +$222
3. X800Pro is the 'recommended' card.
4. 6800Ultra still has 'limited availability' even though it launched Apr 14.
5. No 6800Pro or 6800GT... probably not for a while.

Nvidia could have great technology, but if they don't get yield, availability, pricing, and power right... there could be a big problem. On this last point, we know the 6800GT is supposed to have lower power requirements and a single connector, but when are we going to see it???

Good question. Some people are saying it is a 16-pipe part. Is this true???

If it is, it may be a better buy than the X800Pro, but since you can only run it on the 61.11 drivers, which IMHO are not yet proven, I am keeping an open mind.
 
Nvidia ATi Scrimmage

GeForce 6800 GT 16 pipes?

...Yes, review sites that got the card indicate 16 pipes, but with the core clocked down to 350MHz.

...So I think availability could be an issue here if 16-pipe yields don't increase as fast as expected (and there are plenty of IBM rumours floating around).
 
If I were to buy a card now, I think I'd go ATI, though a 6800 wouldn't be too far behind.

For the most part, the PS3.0 features of the NV40 core don't interest me too much, mainly because I'm a pessimist about the rate of adoption on the part of software developers, and I am leery of buying the first iteration of any new model.

By the time any significant use of the new model comes out, I'd have been better off buying a next-gen card, with better performance and probably fewer issues to boot.

That, and since both high-end cards are so CPU-limited, I'd probably wait until better processors came out.
 
In response to the person who thought I was oversimplifying with my projected clock bump:

- HAHA I told you so.

Sorry, but I couldn't resist. As some sites have reported by now, there will in fact be a super ultra / golden sample version (what AT called a 6850U) that adds 50MHz to the core, giving you a 450MHz / 1100MHz card for the max retail price of $499. As it turns out, the X800XT STILL wins in some benchmarks. Uh oh.

For people who think PS3.0 won't matter over the life of the card: you may be right, but then again, a lot of people have kept their 9700 Pros for 2+ years since they bought them.

The X800XT has BARELY more features than that now 3-year-old card. If you plan to keep this one for 3 years, or even 2, you will be behind the curve. It will be like owning an 8500 now: sure, you can play shader model 1.x instead of 2.0 (never mind that it's slower anyway), but it also looks worse.

Is the difference between 2.x and 3.x the same as that between 1.x and 2.x? Probably not, according to most people, but it's enough to plant a seed of doubt, and that will grow increasingly true over time. Six months from now, if you bought an X800-based part, you'd be wondering even more whether you were buying a soon-to-be-obsolete part, especially if ATI announces their upcoming next-gen part that will clearly be the one with the major new features.

Personally, I don't think this will work against the X800, because developers will support it just as well as PS3.0 for the next few years (I would hope), but you can be sure Nvidia PR will be hammering away at SM3.0, planting seeds of doubt in the minds of the doubt-prone.

I also wonder if the programmable sample pattern that enables temporal AA, ATI's other major new feature, is unique to the X800. I could have sworn that one of the countless previews / reviews of the X800/NV40 said that both the R300 and the NV40 can also do this, so could there be a mysteriously similarly named new AA mode in Nvidia's future? Will ATI enable it on the R300? It will be interesting to see how this generation pans out.
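
If the descriptions I remember are right, the mechanism is roughly this (a minimal C sketch of my understanding; the offsets are illustrative, not ATI's actual sample patterns):

#include <stdio.h>

typedef struct { float dx, dy; } Offset;

/* Two complementary 2-sample patterns; alternating them each frame means
   any two consecutive frames together cover 4 distinct sample positions,
   so 2x AA can look closer to 4x at high frame rates. */
static const Offset even_frame[2] = { {  0.25f,  0.25f }, { -0.25f, -0.25f } };
static const Offset odd_frame[2]  = { { -0.25f,  0.25f }, {  0.25f, -0.25f } };

int main(void) {
    for (int frame = 0; frame < 4; frame++) {
        /* flip the programmable sample pattern on frame parity */
        const Offset *p = (frame & 1) ? odd_frame : even_frame;
        printf("frame %d samples: (%+.2f, %+.2f) (%+.2f, %+.2f)\n",
               frame, p[0].dx, p[0].dy, p[1].dx, p[1].dy);
    }
    return 0;
}

If that's really all it takes, any chip with programmable sample positions could do it, which is exactly why I wonder about the R300 and NV40.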

Standard disclaimer: I'm not an evangelist for either IHV, but I do find this GPU war interesting.
 
I've said it before: ATI launches a card, ATI sells the card. Not just a paper launch like NV. They've had almost a whole month to start selling their cards. Still nothing.
 
It'd be nice if PS3.0 support on NV40 is good enough that it's still viable when the shaders are actually widely used. However, I doubt any games that use PS3.0 in large amounts will be out any time soon, and when they are, I have a feeling NV40 won't be fast enough anyway.

NV45 might be.

First implementations usually have issues anyway.

Not that NV40's support isn't necessary, as somebody had to bite the bullet and kick-start development, but the real winners from Nvidia's being first are going to be the NV45, NV50 and R500 cores.
 
Sxotty said:
digitalwanderer said:
Two words: "retail availability" 8)

SOLD OUT :p

Two more: Best Buy

HardOCP is reporting that Best Buy has X800Pro cards coming in today. They are also taking preorders for XT PE cards, but will only say they'll ship them when they get them.
 
"It'd be nice if PS3.0 support is good enough on NV40 that it will be viable for when the shaders are actually widely used. However, I doubt any games that use PS3.0 in large amounts will be out any time soon, and when they do I have a feeling NV40 won't be fast enough anyway."

Actually, from developer responses about what they plan to do with SM3.0, most answers seemed to indicate that they'll use SM3.0 to optimize shader speed, not to change the output as much (displacement mapping is an exception). For example, a touted use for branching is to exit a shader early when the remaining ops don't apply to the current pixel, effectively shortening execution time for that pixel.

So even when things use SM3.0, they might not be improving IQ, just producing the same IQ faster on that chip (not to say that an NV40 with PS3.0 optimizations will be faster than a higher-clocked X800), but who knows.
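
To make the early-out idea concrete, here's a plain C sketch. The "shader" and its lighting numbers are made up for illustration; the point is only that skipping the expensive path doesn't change the output for pixels it doesn't apply to:

#include <stdio.h>

typedef struct { float n_dot_l; float ambient; } Pixel;

static float shade(Pixel p) {
    float color = p.ambient;                 /* cheap term, always paid */
    float diffuse = p.n_dot_l > 0.0f ? p.n_dot_l : 0.0f;
    if (diffuse == 0.0f)
        return color;                        /* SM3.0-style early out */
    for (int i = 0; i < 8; i++)              /* stand-in for costly lighting math */
        color += diffuse * 0.1f;
    return color;
}

int main(void) {
    Pixel lit = { 0.7f, 0.1f }, unlit = { -0.3f, 0.1f };
    printf("lit pixel:   %.2f\n", shade(lit));    /* full path */
    printf("unlit pixel: %.2f\n", shade(unlit));  /* early exit, same IQ */
    return 0;
}

The unlit pixel returns the same value it would if the loop ran (the clamped diffuse term contributes nothing); the branch just saves the work, which is the "same IQ, faster" case I mean.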
 
Quitch said:
I love the way people always make it sound so easy to bump up the clocks. If the chips were reliable at that clock, do you think nVidia WOULDN'T have shipped it at that clock?
Why wouldn't they? Well, if they thought the ATI offering was going to be significantly slower than theirs even at the shipping clock, why would they? It would cost them more, as yields would be lower, and they'd still have the fastest card. Keeping a bit back in reserve seems sound practice to me - but it will cost them.
 
sethk said:
"It'd be nice if PS3.0 support is good enough on NV40 that it will be viable for when the shaders are actually widely used. However, I doubt any games that use PS3.0 in large amounts will be out any time soon, and when they do I have a feeling NV40 won't be fast enough anyway."

Actually from developer responses to what they plan to do with SM3.0 most of the answers seemed to indicate that they'll be using SM3.0 to optmize shader speed, not to change the output as much (the displacement mapping is an exception.) For example a touted use for branching is to exit a shader early when the remaining ops don't apply to the current shader, effectively shortening execution time for that pixel.

So even when things are using SM3.0 they might not be improving IQ just producing the same IQ faster on that chip (not to say that NV40 with PS3.0 optimizations will be faster than a higher clocked X800) but who knows.

Odds are, though, that the future games that don't just paste on one or two PS3.0 shaders are going to be massively more demanding than what NV40 can handle.

PS3.0 might speed the card up a little, but that won't matter if the game has triple the hardware requirements. Considering the long lag time in development, we might not see real usage for several hardware generations.
 
*Clearly* NV agrees that the X800PE is a faster card than the 6800U. There is no other rational way to interpret their decision to release the 6800 Extreme, or whatever they're going to call it.

Does "faster" = "better"? I find that an uninteresting discussion, actually, certainly at the macro level, and will leave it to others.
 
Looks like the X800Pro/XT were enough to make MSI come out of the closet. MSI is nVidia's number one add-in board partner.

MSI unveils its new ATI™ RADEON™ based graphics cards alongside ATI's launch of its RADEON™ X800XT and X800 PRO graphics processor units


“MSI believes in the importance and value of relationships, which is why it partners with industry leaders like ATI to bring the leading graphics solutions,” said Calvin Wu, CEO of MSI Computer Corp., a subsidiary of MSI. “When it came time to look at the real-world graphics performance, the RADEON™ remains a popular choice with many gamers. At MSI, we are always pushing the boundaries of performance and consumer value, and ATI's solutions give our customers a graphics-rich environment while remaining cost effective.”

http://www.msicomputer.com/pressrelease/rx800.asp
 