R500/NV50

DaveBaumann said:
stevem said:
I kinda wondered why Ati would bother with a re-write of their OGL ICD for current parts (apart from the obvious), if a new architecture was scheduled for R520. Xbox2 & Longhorn probably indicate the timeframe for their next core based on R400 type tech.

The best I can fathom, as I mentioned here, is that the next core is actually an extension of the R300 line. This also corresponds with the comment I heard that the OGL rewrite wouldn't be available "for a couple of product releases". It seems like the rewrite is actually targeting R600, which, if I'm correct, will be the first PC product to be based on the unified shader architecture. They should be able to prototype the drivers on similar silicon as they will have the XBox chips back, which (again, if I'm correct) will be the platform R600 is developed from.

Architectural Lines:
R300 --> R420 --> R520

R400 (not released) --> R500 (XBox) --> R600 (PC / Longhorn)

So which is the more powerful chip, the PC's R520 or the XBox2's R500?
 
kenneth9265_3 said:
So which is the more powerful chip, the PC's R520 or the XBox2's R500?

My guess is that it's "horses for courses" -- although they will be able to do similar things, the requirement emphasis will be slightly different. On the PC the graphics will need to cope with a wider variety of 3D feature utilisation, while on the XBox the games will have been designed to a set specification with a clear emphasis on "shader instructions per pixel" (hence the shader rate is more important than the pure pixel fill-rate).
 
Humus said:
chavvdarrr said:
no, it's great :)

I don't see what's so great about killing innovation and competition, because that's what it's going to do if they end up doing it that way. The IHVs will have no choice but to implement the exact functionality that Microsoft thinks is good, rather than thinking for themselves. MS becomes the industry dictator, while IHVs end up building such similar products that basically only the brand name will differ.

Does Microsoft take input from companies like Nvidia and ATI and take that into consideration when plotting the course DirectX will take?

And what about OpenGL? Nvidia and ATI would still be able to add other features not supported by DirectX and make them available via extensions, with Nvidia's UltraShadow being a current example (unless of course I'm mistaken and it can be used in DirectX)
 
Lezmaka said:
Does Microsoft take input from companies like Nvidia and ATI and take that into consideration when plotting the course DirectX will take?

Yes -- you can see it in the way DX has been developed until now. However, they also force the industry's hand somewhat, which is probably why NVIDIA even supports PS 1.4. Seems to me like Microsoft wants to do some more forcing, because IHVs have traditionally deliberately avoided each other's features.

On one hand, I think that it's a good idea to set a new baseline, to remove most of the current caps (a lot will be removed anyway when dropping the fixed-function pipeline, and dropping early shader models is a good idea IMO). From a developer POV, I can definitely see such a baseline as a very good thing. (Even though on the downside it would mean that anyone wanting to write for older cards will have to use DX9, and that'd likely limit DX10 use.)

On the other hand, I agree with Humus that not allowing new options will prevent companies from differentiating their products except by performance. Perhaps future versions of DX will allow more differentiation after the baseline is set, assuming it's actually required. It's possible that a general enough programming model will make further development of new features rather less needed than it is now. The way I see it, that's what DX10 is aiming at.
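Going back to the caps point, here's a rough sketch of the kind of per-card branching that a fixed DX10 baseline would mostly remove. It's a minimal Direct3D 9 example; the helper name and path labels are made up for illustration.

#include <d3d9.h>

// Hypothetical helper: pick a rendering path from the capabilities a D3D9
// adapter reports. Under a fixed DX10 baseline, every compliant part would
// report the same feature set and this branching would collapse to one path.
const char* ChooseShaderPath(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return "fixed-function fallback";

    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return "ps_2_0 path";            // R3xx / NV3x class and up
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return "ps_1_4 path";            // R200 class
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        return "ps_1_1 path";            // NV20 class
    return "fixed-function fallback";
}

And that's only the shader version; in practice there are plenty of other caps bits to test before you know what a given card can actually do.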
 
ET said:
On the other hand, I agree with Humus that not allowing new options will prevent companies from differentiating their products except by performance. Perhaps future versions of DX will allow more differentiation after the baseline is set, assuming it's actually required. It's possible that a general enough programming model will make further development of new features rather less needed than it is now. The way I see it, that's what DX10 is aiming at.

I see this as an excellent opportunity for OpenGL to get back some market share. If they can pull the finger out...
 
ET said:
Yes -- you can see it in the way DX has been developed until now. However, they also force the industry's hand somewhat, which is probably why NVIDIA even supports PS 1.4. Seems to me like Microsoft wants to do some more forcing, because IHVs have traditionally deliberately avoided each other's features.
Just a nitpick, but I really think that NVidia's support for PS1.4 is simply a side-effect of supporting PS2.0. If you support the latter, it's probably not all that much work to emulate the former.
 
Ostsol said:
Just a nitpick, but I really think that NVidia's support for PS1.4 is simply a side-effect of supporting PS2.0. If you support the latter, it's probably not all that much work to emulate the former.
But I still believe they wouldn't have done it if they didn't need to. Any little bit to help kill PS1.4, you know... (I'm curious as to whether NVIDIA implemented the PS1.4 modifiers in hardware; that would show it did do special work for PS1.4. If performance with modifiers is the same as without them, then NVIDIA likely did some dedicated PS1.4 work.)

After all, there have been easier things that NVIDIA could have done but didn't for marketing reasons.
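For anyone wondering what the modifier question amounts to, here's a purely hypothetical sketch (the shader strings are illustrative only, not taken from any driver). ps_1_4 has instruction modifiers such as _x2, which no longer exist in ps_2_0, so hardware without dedicated support for them would have to translate each one into an extra instruction:

// ps_1_4 fragment: the _x2 modifier doubles the result of the mul "for free",
// assuming the hardware implements the modifier directly.
const char* ps14 =
    "ps_1_4\n"
    "def c0, 0.5, 0.5, 0.5, 1.0\n"
    "mul_x2 r0, v0, c0\n";

// Rough ps_2_0 equivalent: there is no _x2 modifier, so the doubling becomes
// an explicit extra add.
const char* ps20 =
    "ps_2_0\n"
    "dcl v0\n"
    "def c0, 0.5, 0.5, 0.5, 1.0\n"
    "mul r0, v0, c0\n"
    "add r0, r0, r0\n"
    "mov oC0, r0\n";

If the modifiers cost extra cycles on NVIDIA hardware, that would point to translation rather than dedicated PS1.4 work, which is roughly the test suggested above.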
 
Scott C said:
Um, yeah how would this kill competition?

The details of the architectures and how they achieve a more unified, consistent set of features will differ markedly.

Is a Pentium 4 the same as an Athlon 64 just because they both run x86 with MMX and SSE2?

Ok, bad wording maybe. It won't kill competition, but it will make competition much less interesting. It'll kill choice, however. You end up selecting between two cards with more or less the exact same feature set, rather than having a choice between a card that does FP32 slow but FP16 fast, or a card that only does FP24 but does it fast.

The P4 vs. A64 isn't really the same thing. Well, for CPUs performance has always been pretty much the only deciding factor, so it's not nearly as interesting as the GPU war. But they have different feature sets too, especially with 64-bit on the Athlon 64.
 
Humus said:
You end up selecting between two cards with more or less the exact same feature set, rather than having a choice between a card that does FP32 slow but FP16 fast, or a card that only does FP24 but does it fast.

The problem with having FP32/FP16 vs FP24 precision from the two major IHVs is that it makes developers' lives more complicated. Also, I don't really think that most consumers really know much about the intricacies of FP32/FP16 vs FP24.
 
DaveBaumann said:
As far as DirectX is concerned, developers only need to care about default or partial precision - they need not know the underlying internal precisions.

True. However, this is ultimately dependent on a definition of "full" and "partial" precision. Under DirectX 9.0c, full precision is FP32; through DirectX 9.0b, full precision is FP24. The general problem for some developers using differing types of hardware is that they have had to code separate paths to take advantage of certain characteristics of each respective hardware type. This situation could be even worse if there was more competition, and possibly even more product differentiation, in the GPU market.
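As a minimal sketch of what that looks like from the developer side (the shader itself is made up for illustration): in DirectX 9 HLSL you write float where default (full) precision is required and half where a partial-precision hint is acceptable, and the driver maps those onto FP16/FP24/FP32 however the hardware defines them.

// Hypothetical HLSL pixel shader kept as a C string, e.g. for compilation via
// D3DXCompileShader. 'float' requests default (full) precision; 'half' merely
// hints that partial precision (FP16) is acceptable for those values.
const char* hlslSource =
    "sampler2D baseMap;                          \n"
    "float4 main(float2 uv  : TEXCOORD0,         \n"
    "            half4 tint : COLOR0) : COLOR    \n"
    "{                                           \n"
    "    half4 texel = tex2D(baseMap, uv);       \n"   // FP16 is plenty for an 8-bit texture fetch
    "    return texel * tint;                    \n"
    "}                                           \n";

Compiled for ps_2_0, the half usage shows up as the _pp modifier on the relevant instructions; R3xx-class hardware ignores the hint and runs everything at FP24, while NV3x/NV4x can drop those instructions to FP16.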
 
jimmyjames123 said:
Also, I don't really think that most consumers really know much about the intricacies of FP32/FP16 vs FP24.
Most consumers don't know much more than "it has a higher number so it must be better" (I still meet people who think that the GeForce4 MX is better than the GeForce3 -- I suspect they're the majority). They also don't buy the high end, usually. We're talking about enthusiasts, who drive the market. For them the market will become less interesting.

euan said:
I see this as an excellent opportunity for OpenGL to get back some market share. If they can pull the finger out...
I don't see the differentiation of hardware vs. a monolithic DX10 as the thing that will help get OpenGL back. That'd only happen if there'd be truly significant hardware advances that will not be in DX10. Considering the feature set of DX10, I find that unlikely.

Most people, both developers and consumers, just want to have it simple. Developers want a single hardware target that can do everything they want, and consumers want to buy a graphics card and know that it will work with the games they want. The fact that different cards have different features, and show different things, is confusing to consumers and causes work for developers. (Some developers might still create different rendering paths, because of speed issues, but that's still easier than having to make sure the program can run at all with the hardware.)

However, where OpenGL could win back some share is in the transition period, though that'd be a timing thing that I see as likely to fail. There will likely be a period when DX10 cards will still be out of the hands of most people. Developers will have to choose between using DX10 with its new features but not supporting older cards, or using DX9, which will run on all cards but without the new features. OpenGL will allow the best of both worlds.
 
jimmyjames123 said:
The problem with having FP32/FP16 vs FP24 precision from the two major IHVs is that it makes developers' lives more complicated.

As far as DirectX is concerned, developers only need to care about default or partial precision - they need not know the underlying internal precisions.
 
Lezmaka said:
And what about OpenGL? Nvidia and ATI would still be able to add other features not supported by DirectX and make them available via extensions, with Nvidia's UltraShadow being a current example (unless of course I'm mistaken and it can be used in DirectX)

This is an issue I've touched on previously. When you are building parts that are to be utilised 90% of the time in one API and only 10% in another, and the 90% API has a fixed dictation of specifications, what are you going to spend your transistor budget on? Something that may sound cool in only a minority of games, or making DX as fast as possible (and speed will be one of your main differentiators)?

You'll not get large-scale sways from the specification - if it costs more than a handful of gates then it's probably out and you'll be looking for ways to make DX faster.
 
jimmyjames123 said:
True. However, this is ultimately dependent on a definition of "full" and "partial" precision. Under DirectX 9.0c, full precision is FP32; through DirectX 9.0b, full precision is FP24. The general problem for some developers using differing types of hardware is that they have had to code separate paths to take advantage of certain characteristics of each respective hardware type. This situation could be even worse if there was more competition, and possibly even more product differentiation, in the GPU market.

Again, the precision is not really a consideration in these cases, since the precisions were chosen with the most likely usage scenarios and the number of instructions the different shader models allow in mind. Political issues aside, the reason FP24 is full precision for SM2.0 is that, with the shader lengths SM2.0 allows, there are never likely to be any issues; SM3.0 adds greater flexibility and instruction length, increasing the likelihood that precision issues may occur if shaders were coded to the maximum lengths / capabilities. These are the elements that the developer primarily cares about - all they need to know is that full precision gives what's needed for the shader lengths that are sensible for the hardware level they're coding to, and that partial precision can give a speed-up if the source / instructions only require PP (or the shaders are shorter).
 
DaveBaumann said:
Again, the precision is not really a consideration in these cases, since the precisions were chosen with the most likely usage scenarios and the number of instructions the different shader models allow in mind.

Naturally, precision issues can be avoided as long as the developer knows what he is doing, but I still don't think that the predefined precision state necessarily simplifies things. For instance, the NV GeForce FX SM 2.0+ hardware requires use of partial precision in order to run at decent speeds, while the ATI Radeon 9xxx SM 2.0 hardware is able to run at full FP24 precision as defined by spec, even though the NV cards support longer instruction lengths. I will stand by this statement though (which did not post above for some reason): The general problem for some developers using differing types of hardware is that they have had to code separate paths to take advantage of certain characteristics of each respective hardware type. This situation could be even worse if there was more competition, and possibly even more product differentiation, in the GPU market.
 
And that's not an issue relating to "precisions"; that's an issue relating to fundamental decisions made by the hardware vendors. Curiously, this isn't necessarily going to be any different.
 
I must say that I agree with Humus on the matter of MS dictating the specs.
It would be like choosing between a pair of Diesels or Levi's, well, sorta...
I must ask this because it hasn't been confirmed before: both NV50 and ATI's R500/R520 will be on the 90nm node, correct!?
 
The R500 chips are supposed to be using the new 0.11µm process, which will allow them higher clocks than the current X800s, lower power usage, etc.

The NV50 is also supposed to use a new process... it is unclear which one it will use.

The R500 is supposed to support Shader Model 3.0, and the NV50 is supposed to have native PCI Express support.

Both should be at least 30% faster than the current NV40 and R420.
 