No 6800 Ultra Extreme, still the same 6800 Ultra

Well, I'm still holding out hope that nVidia will be able to significantly improve shader performance very shortly, and thus overcome the performance gap.
 
Chalnoth said:
And SM3 is not an intermediate step between generations. It's the basis for the current generation of graphics programming interface, and will be essentially the standard for the next 1.5-2 years.

No. The basis will be SM2. Why? Because the vast majority of DX9 cards in people's systems will be SM2. And, actually, most of them aren't even capable of performing DX8 shaders well (most "DX9" cards actually in systems are FX 5200s). Even if ATi did have an SM3 part out now, it wouldn't change that.

And, for the record, I am going to be buying a 6800... yes, I'm buying it for SM3. I don't expect it to make a huge difference, but I do want to be able to turn on those few extra features that will be SM3-only (probably mostly VS3, actually). I am disappointed that ATi does not have VS3 support, but the lack of PS3 is not a letdown in the least. However, it's definitely not going to be the end of the world as you like to portray it.
 
Chalnoth said:
Well, I'm still holding out hope that nVidia will be able to significantly improve shader performance very shortly, and thus overcome the performance gap.
That sounds way too much like the old "just wait until the real drivers are out for nVidia's card and you'll really see what it can do!" magic-driver line, Chal.

Holding off on buying it until they come out with the magic drivers would be my best advice, if that's what you're feeling. ;)
 
Tahir said:
End of discussion really.
It hasn't been so far, and from all I've ever learned about humans, it never will be. At the end of the day, every single argument is utterly pointless, but that hasn't stopped people from having debates on whatever topic. This just happens to be one that's been around too many times and shows no sign of ever coming to an end, is all.
 
anaqer said:
Tahir said:
End of discussion really.
It hasn't been so far, and from all I've ever learned about humans, it never will be. At the end of the day, every single argument is utterly pointless, but that hasn't stopped people from having debates on whatever topic. This just happens to be one that's been around too many times and shows no sign of ever coming to an end, is all.

End of discussion for me... don't see a point in going in circles.
But you baited me and I am in the discussion again.
Therefore you win. :p

I disagree that every single argument is utterly pointless. Most are, some are not. In some arguments, whether as a spectator or an active participant, you learn something new. However, most arguments revolve around the symptom and not the cause. What is the root of the problem? I believe it is simply the fact that no one likes to be proven wrong, even if they know they are. Massive blow to the ego and all that, old chum... ;)

The reason it won't come to an end is that it's natural to have a difference of opinion. IMHO, of course.
 
Tahir said:
But you baited me and I am in the discussion again. Therefore you win. :p
Hot damn. At the risk of ending up looking as if "winning" were of any importance to me... this one actually made me laugh out loud with happiness. Hell if I know why; I never would have thought such a lousy day could possibly end with a smile.

Maybe all is not completely pointless after all. Thus you win. :p 'nite all.
 
Chalnoth said:
I think that was a completely different situation. ATI has released a new architecture without (many) new features. This is really the first time this has happened since 3dfx and the Voodoo3.

This argument is just absolutely f'in ridiculous. nVidia released the original GF3 with 1.1 shader support in the spring of '01. It released FX hardware in the summer of '03, yet because of "mistakes", high-profile developers like Newell stated that they'd treat the lineup as DX8 parts. It's now the summer of '04 and nVidia is close to releasing parts that can viably run software beyond 1.1.

Essentially, between the spring of '01 and the summer of '04 we're looking at marginal tech improvements coupled with speed refreshes and, in your own words, a rather large "mistake" that did little, if anything, to advance the use of the tech by developers. That's over three entire years, which in this industry is quite a bit of time!

So your point is...? <aside from twisting anything you can think of into something damning against ATI>
 
Chalnoth said:
For every generation until the R300, nVidia was the first-adopter of a new featureset. They were also the performance leader (starting with the GF 256). Now ATI is trying to get ahead by skimping on features, by rehashing an architecture that is nearly two years old.

And SM3 is not an intermediate step between generations. It's the basis for the current generation of graphics programming interface, and will be essentially the standard for the next 1.5-2 years.

The GF4 was not a revolutionary architecture by any means. Like many have already said, it increased performance over the GF3 Ti line, and very little else. Why would ATI support SM3 when SM2 still has plenty of headroom for development? Do you think the three or so games that are currently out have put an end to SM2's lifespan? If you do, you're sadly mistaken. Companies don't just drop everything overnight for newer technology; business doesn't work like that. It's a gradual process that takes time, and when that time comes the next generation will be upon us (speaking like a preacher, aren't I?). SM3 is an intermediate step because all it offers are improvements to the DirectX 9 standard. Just because a patch may be available for a game that adds a few features and optimizes code to run slightly faster doesn't mean it is a necessity to run the game.
 
ANova said:
The GF4 was not a revolutionary architecture by any means. Like many have already said, it increased performance over the GF3 Ti line, and very little else.
Right. And when was it released? About one year after the GF3 line. It's been nearly two years since ATI released the Radeon 9700 Pro.

Why would ATI support SM3 when SM2 still has plenty of headroom for development?
This has always been the case at the juncture of a new architecture. Additionally, it's once SM2 really takes off that we'll start to see noticeable differences between SM2 and SM3 (as it's easier to extend to SM3 once SM2 is fully supported).

Do you think the three or so games that are currently out have put an end to SM2's lifespan?
I don't think I ever claimed anything like that. But high-end processors should be ahead of the technology curve, not riding it. You can't make games without the hardware. I think it'd just be wrong for ATI to ride on nVidia's coattails, keeping the highest-performing products by not improving the technology, and then only adding SM3 once nVidia's low-end SM3 hardware is relatively common and games start to make noticeable use of SM3.
 
Chalnoth said:
digitalwanderer said:
:oops:

Uhm, can I find that just a tad hypocritical-sounding, considering that nVidia is only now giving SM2.0 any form of realistic support this generation? :?
No, digi, I don't. They had better-than-SM2 support; it was just a poor implementation. ATI decided not to support SM3. nVidia failed to support SM2 at high speed. I think there's a huge difference. At least nVidia has been working to fix that mistake.

What Chalnoth said. Ruby (an ATi demo) is proof of that.

It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.

PS1.1 succeeded over 1.4 simply because it was an industry-wide standard both on and off the PC and set a new baseline.

SM2.0 is just a waypoint on the DX9 map leading to SM3.0. In fact, I believe it was initially quite unimportant until nVidia walked out of DX9 discussions and was subsequently unable to get fully to SM3.0 with NV30.
 
radar1200gs said:
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.

Unlikely that it could possibly have scaled to 700MHz. The announced speeds of 500MHz were its targeted clocks (although they wouldn't have gone for the coolers they did).

SM2.0 is just a waypoint on the DX9 map leading to SM3.0.

Wrong. SM3.0 is a waypoint to 4.0; it's a teaser. Whatever has the widespread support consensus is what's most important, and I doubt you'll find Intel supporting SM3.0.
 
radar1200gs said:
SM2.0 is just a waypoint on the DX9 map leading to SM3.0. In fact, I believe it was initially quite unimportant until nVidia walked out of DX9 discussions and was subsequently unable to get fully to SM3.0 with NV30.

Why is SM2.0 only a waypoint? It looks to me like it's the lowest common denominator and the floating-point shader baseline. Until we see SM3.0 hardware in quantity, which won't be with the 6800 series since they are all relatively high-end parts, the vast majority of the installed base, ATI or Nvidia, will have SM2.0 support when it comes to floating-point shaders.

Also, your statement about SM2.0 being unimportant "until Nvidia walked out of DX9 discussions" indicates that you have a very Nvidia-centric viewpoint. However, even had they stayed and NV30 managed full SM3.0 support, the FX5200 (which is still Nvidia's best-selling DX9 part) would still have provided at best severely limited usable DX9 support.
 
Chalnoth said:
Right. And when was it released? About one year after the GF3 line. It's been nearly two years since ATI released the Radeon 9700 Pro.

The timeframe doesn't matter; the point was that nVidia has done the same in the past. The only thing that matters is that SM2 is just now starting to be utilized. There simply is no reason to jump ahead by supporting SM3 atm.

This has always been the case at the juncture of a new architecture. Additionally, it's once SM2 really takes off that we'll start to see noticeable differences between SM2 and SM3 (as it's easier to extend to SM3 once SM2 is fully supported).

No, we won't see noticeable differences, as SM3 is a minimal upgrade.

I don't think I ever claimed anything like that. But high-end processors should be ahead of the technology curve, not riding it. You can't make games without the hardware. I think it'd just be wrong for ATI to ride on nVidia's coattails, keeping the highest-performing products by not improving the technology, and then only adding SM3 once nVidia's low-end SM3 hardware is relatively common and games start to make noticeable use of SM3.

There is nothing wrong with it; it's called being smart. ATI isn't forcing nVidia to support SM3; they chose to go that path. ATI doesn't believe there is any benefit to it at this point in time. That is their prerogative, a conclusion they came to after extensive research. If nVidia had decided not to support SM3, that wouldn't have meant the end of progress in technology; both companies would have supported SM3 in their next product cycles.
 
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.
Low-k doesn't give that big of a performance boost, first of all, and have we all forgotten about the God-awful 128-bit memory bus on NV30?
 
The Baron said:
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.
Low-k doesn't give that big of a performance boost, first of all, and have we all forgotten about the God-awful 128-bit memory bus on NV30?

Well, giving the NV30 the benefit of the doubt: I have seen a 500MHz NV30 perform better than 400/850-clocked NV35s, and in some cases faster than the 450/850 Ultra cards.


Multitexturing isn't quite as bandwidth-limited as single texturing, and the multitexturing fill rate was there. It's hard to say how shader-bound ops would have been bottlenecked by the memory config, but in my experience with my card, the huge memory bandwidth improvement of my FX 5900 hasn't really provided a real benefit in my shader-bound titles.


The NV30's real problem was, IMO, the integer units still on the core, which hurt its FP performance.
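To put rough numbers behind the multitexturing point above, here is a minimal back-of-the-envelope sketch of per-pixel memory traffic. The byte counts (32-bit colour write, Z read plus write per pass, one 32-bit texel fetch per layer, no caching) are illustrative assumptions, not measured figures.

```python
# Rough sketch: why multitexturing leans less on memory bandwidth than rendering
# the same layers as separate single-textured passes.
# Assumptions (illustrative only): 4-byte colour write per pass,
# 4-byte Z read + 4-byte Z write per pass, 4 bytes fetched per texture layer.

def bytes_per_pixel(texture_layers: int, passes: int) -> int:
    color_write = 4 * passes           # framebuffer write, once per pass
    z_traffic   = 8 * passes           # Z read + write, once per pass
    tex_fetches = 4 * texture_layers   # texel fetches (ignoring texture caching)
    return color_write + z_traffic + tex_fetches

# Two texture layers as two single-textured passes vs. one multitextured pass:
print(bytes_per_pixel(texture_layers=2, passes=2))  # 32 bytes/pixel
print(bytes_per_pixel(texture_layers=2, passes=1))  # 20 bytes/pixel
```

The multitextured pass only pays the framebuffer and Z cost once, which is why the fill-rate side of the chip, rather than the memory bus, tends to become the limit.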
 
The Baron said:
It would have been interesting to see NV30 as originally intended, on a Low-K process and clocked around 700MHz.
Low-k doesn't give that big of a performance boost, first of all, and have we all forgotten about the God-awful 128-bit memory bus on NV30?
Who's doing Low-K for the performance boost? You do it to lower signal interference at high frequency and enable faster clock speeds (edit: through a lowering of the total heat budget in NV30's case).

The 128-bit bus really doesn't matter if you can run at twice the memory speed of your competitor (you effectively have his 256-bit bus anyway).
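As a quick sanity check on that arithmetic: peak memory bandwidth is roughly bus width times effective memory clock, so a 128-bit bus at double the data rate does match a 256-bit bus. The clocks below are approximate shipping figures for NV30- and R300-class boards, used purely as illustrative assumptions; at the clocks NV30 actually shipped at, its data rate fell short of double.

```python
# Minimal sketch: peak memory bandwidth = (bus width in bytes) x (effective clock).
# Clock figures are approximate, illustrative assumptions rather than exact specs.

def bandwidth_gb_per_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

print(bandwidth_gb_per_s(128, 1000))  # ~16.0 GB/s: 128-bit bus, 500 MHz DDR (NV30-style)
print(bandwidth_gb_per_s(256,  620))  # ~19.8 GB/s: 256-bit bus, 310 MHz DDR (R300-style)
print(bandwidth_gb_per_s(128, 1240))  # ~19.8 GB/s: what the 128-bit bus would need to match it
```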
 
radar1200gs said:
Who's doing Low-K for the performance boost? You do it to (...) enable faster clock speeds.

Uhhh... correct me if I'm wrong, but aren't those two generally tied together?
 
Sage said:
radar1200gs said:
Who's doing Low-K for the performance boost? You do it to (...) enable faster clock speeds.

Uhhh... correct me if I'm wrong, but aren't those two generally tied together?
There are other aspects of Low-K that can help boost performance; I suggest you read some of the links I and others have posted on the forum concerning Low-K.
 