Will G80 be fully D3D10 compatible?

1) With the G80 unifying pixel, vertex and whatever else at the API level, will it be fully D3D10 compatible?
2) Also, won't unifying at the API (software) level make it slower than unifying at the hardware level, like the R600, which does its unification in hardware?

I understand that T&L can run just as fast in software as in hardware, and thus most new cards run T&L at the software level. But D3D10 is all new tech, so shouldn't running it at the hardware level be faster than at the software level?

US
 
1) Yes.
2) That remains to be seen. Though I think the benefits of a unified shader architecture are larger than the costs.

Unknown Soldier said:
I understand that T&L can run just as fast in software as in hardware, and thus most new cards run T&L at the software level. But D3D10 is all new tech, so shouldn't running it at the hardware level be faster than at the software level?
Err, what?
Are you referring to programmable vertex processing as opposed to fixed-function vertex processing? If so, hardware vertex processing has only ever been "fixed function" at the API level; the hardware has always been programmable.
 
Yeah, maybe there will be better flexibility with fully programmable control over shader/vertex units than with fixed hardware control?
 
Xmas said:
1) Yes.
2) That remains to be seen. Though I think the benefits of a unified shader architecture are larger than the costs.


Err, what?
Are you referring to programmable vertex processing as opposed to fixed-function vertex processing? If so, hardware vertex processing has only ever been "fixed function" at the API level; the hardware has always been programmable.

I meant that T&L was introduced in hardware with DX7, but by DX8/9 (can't remember which) I read that T&L was mainly done at the software level and no longer in hardware.

If I'm wrong then just let me know. ;)

US
 
Xmas said:
Err, what?
Are you referring to programmable vertex processing as opposed to fixed-function vertex processing? If so, hardware vertex processing has only ever been "fixed function" at the API level; the hardware has always been programmable.

Oh wait ... ok, I think I understand what you're saying now.

Since DX8, the hardware has been programmable.

Ok, I see my mistake. What you're saying is that although the R600 can do the unification at the hardware level, it's not necessary. Ok, got it.

Thanks. :)

US
 
Unknown Soldier said:
I meant that T&L was introduced in hardware with DX7, but by DX8/9 (can't remember which) I read that T&L was mainly done at the software level and no longer in hardware.

If I'm wrong then just let me know. ;)

US

T&L is still done on the GPU. The software (driver) translates the T&L setup into a shader program in case the hardware doesn't have a dedicated T&L unit. Beginning with Vista, this translation is done by the runtime if the driver reports the right caps. This will make life a little bit easier for driver developers.
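To make that concrete, here's a rough CPU-side sketch of what DX7-style fixed-function T&L boils down to once it's expressed as a program: one matrix transform plus a simple diffuse lighting term. The names and the C++ form are purely illustrative; a real driver or runtime would emit equivalent GPU shader code, not C++.

```cpp
// Illustrative sketch only: fixed-function T&L for one vertex, written as an
// ordinary function. A driver/runtime would generate equivalent shader code.
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major, row-vector * matrix convention

Vec4 Transform(const Vec4& v, const Mat4& m) {
    return {
        v.x * m.m[0][0] + v.y * m.m[1][0] + v.z * m.m[2][0] + v.w * m.m[3][0],
        v.x * m.m[0][1] + v.y * m.m[1][1] + v.z * m.m[2][1] + v.w * m.m[3][1],
        v.x * m.m[0][2] + v.y * m.m[1][2] + v.z * m.m[2][2] + v.w * m.m[3][2],
        v.x * m.m[0][3] + v.y * m.m[1][3] + v.z * m.m[2][3] + v.w * m.m[3][3],
    };
}

struct VertexOut { Vec4 position; float diffuse; };

// The whole "T&L" step: one matrix transform plus one N.L lighting term.
// Nothing here needs a dedicated T&L unit; any programmable unit can run it.
VertexOut TransformAndLight(const Vec4& pos, const Vec3& normal,
                            const Mat4& worldViewProj, const Vec3& lightDir) {
    VertexOut out;
    out.position = Transform(pos, worldViewProj);
    float nDotL = normal.x * lightDir.x + normal.y * lightDir.y
                + normal.z * lightDir.z;
    out.diffuse = std::max(nDotL, 0.0f);  // clamp back-facing light to zero
    return out;
}
```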
 
A key fact that I'm sure many of you are aware of is that being D3D10 compatible is a binary decision. You either are or you aren't - there is no in between...

With that in mind, I've seen a few news items quoting NV people as saying the G80 will be a D3D10 part. I forget where this was, but I'm sure I found the link from a post either in these forums or on the general B3D website.

Short of some spectacular marketing cock-up, the NV people must know that it's "all or nothing" with D3D10, so if they're making any noises about D3D10+G80 then it's almost certainly a D3D10 part.

Just my interpretation of course ;)

Not that it's done me any good yet, but I just need to keep hassling the right people to see if I can get my hands on one :cool:

Cheers,
Jack
 
The G80 will have full support for dx10, but I don't think that's enough. It's in nVidia's tradition to release a part that has all the features but not the performance. I expect it to be a very powerful dx9 part and a fully functional dx10 part.
 
oeLangOetan said:
The G80 will have full support for dx10, but I don't think that's enough. It's in nVidia's tradition to release a part that has all the features but not the performance. I expect it to be a very powerful dx9 part and a fully functional dx10 part.

I'll tip my hat if any IHV manages to get high DX9.0 and D3D10 performance at the same time on its first D3D10 GPU.
 
oeLangOetan said:
The G80 will have full support for dx10, but I don't think that's enough. It's in nVidia's tradition to release a part that has all the features but not the performance. I expect it to be a very powerful dx9 part and a fully functional dx10 part.

This way of thinking caused nVidia a little problem last time. I think with G80 they'll try to have a pretty usable part even with dx10.
 
Sure, DX10 is going to be up to 8-10 times faster, so DX9 performance couldn't possibly match, that's what you meant, no? :D

It's likely that they'll be fast at stuff that's been around for a while (like DB, for example) and not quite so fast at the new stuff like the GS. Time will tell, but if both sides release pretty close to the DX release, my money is that neither will have a next-gen scorcher... IMHO
 
Unknown Soldier said:
1) With the G80 unifying pixel, vertex and whatever else at the API level, will it be fully D3D10 compatible?
2) Also, won't unifying at the API (software) level make it slower than unifying at the hardware level, like the R600, which does its unification in hardware?
There is no "unification" at the API level. All that happens at the API level is that the instruction set for vertex shaders will be the same as that for pixel shaders. That's all. It's up to the hardware developer whether or not to use the same unit for both types of shaders.

Unification of vertex and pixel shaders, then, is something that doesn't need DX10. It would have been quite possible for either IHV to produce a unified pipeline part for DX9, for example. The exact same benefits would have been realized. I suspect they didn't simply because the instruction sets weren't yet the same.

Unification of vertex and pixel shaders is an optimization that is meant to prevent bottlenecks. If you think of a normal frame, there will be some surfaces which are large quads covering hundreds (or thousands) of pixels. When rendering these surfaces, the pixel shader will be a bottleneck, and the vertex shader will likely be idle. There may also be some sub-pixel polygons in view, particularly for faraway objects. For these surfaces, the pixel shader will likely be idle much of the time.

This isn't something tied to DX10. It's just a fact of the way 3D rendering works. So the question becomes: will the part that adds the required transistors (and die area) to efficiently schedule both vertex and pixel shaders through the same pipeline outperform or underperform the part that used that same area to instead increase the number of pipelines?
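As a toy illustration of that trade-off (the unit counts and cycle costs below are invented, and this is not a model of how any real GPU schedules work), compare a fixed split of units against a shared pool when the vertex/pixel mix swings between the two extremes described above:

```cpp
// Toy comparison: dedicated vertex/pixel units vs. a unified pool.
// All numbers are made up to show the load-balancing argument, nothing more.
#include <algorithm>
#include <cstdio>

int main() {
    // Work per frame phase, in arbitrary cycles.
    // Phase 0: big quads  -> little vertex work, lots of pixel work.
    // Phase 1: tiny polys -> lots of vertex work, little pixel work.
    const int vertexWork[2] = { 100, 1000 };
    const int pixelWork[2]  = { 1000, 100 };

    const int totalUnits = 10;

    for (int phase = 0; phase < 2; ++phase) {
        // Dedicated: fixed 3 vertex units + 7 pixel units.
        // The slower side gates the phase while the other side idles.
        int dedicated = std::max(vertexWork[phase] / 3, pixelWork[phase] / 7);

        // Unified: all 10 units chew through whichever work exists.
        int unified = (vertexWork[phase] + pixelWork[phase]) / totalUnits;

        std::printf("phase %d: dedicated %d cycles, unified %d cycles\n",
                    phase, dedicated, unified);
    }
    return 0;
}
```

The dedicated split is only competitive when the workload happens to match its fixed ratio; the unified pool delivers the same throughput in both phases. Whether that win pays for the extra scheduling transistors is exactly the open question.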

With the direction that ATI has taken in developing their hardware, the extra cost of making their hardware unified would likely be small. This is not the case with nVidia's current design.
 
Morgoth the Dark Enemy said:
Sure, DX10 is going to be up to 8-10 times faster, so DX9 performance couldn't possibly match, that's what you meant, no? :D

It's likely that they'll be fast at stuff that's been around for a while (like DB, for example) and not quite so fast at the new stuff like the GS. Time will tell, but if both sides release pretty close to the DX release, my money is that neither will have a next-gen scorcher... IMHO

When you say DX10 will be 8-10 times faster... does that mean that if the particular DX10-compliant game in question were running on DX9, it would have 8-10 times lower fps? How are you measuring the performance here? Just curious, thanks.
 
The only way in which it makes any sense for DX10 to be 8-10 times faster would be if either:
1) A particular algorithm is just massively more efficient in DX10 than DX9, or
2) The overhead of the API is smaller by a factor of 8-10 (which I doubt, but let's go with it), which would mean, for instance, that very small batches could possibly render 8-10 times faster. But normal gameplay would see only a few percent improvement.
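Here's a back-of-the-envelope sketch of point 2 (all costs are invented round numbers, purely to show the shape of the argument): a 10x cut in per-draw-call overhead transforms a frame made of thousands of tiny batches, but barely moves a frame where GPU work dominates.

```cpp
// Invented numbers: how much a 10x drop in per-draw-call overhead buys you
// depends almost entirely on how batch-bound the frame already is.
#include <cstdio>

int main() {
    const double oldOverheadUs = 10.0;  // assumed cost per draw call, old API
    const double newOverheadUs = 1.0;   // assumed ~10x lower per-call cost

    struct Scenario { const char* name; int drawCalls; double gpuWorkUs; };
    const Scenario scenarios[] = {
        { "tiny batches",  10000,  2000.0 },  // CPU/API-bound frame
        { "typical frame",   200, 30000.0 },  // GPU-bound frame
    };

    for (const Scenario& s : scenarios) {
        double oldFrame = s.drawCalls * oldOverheadUs + s.gpuWorkUs;
        double newFrame = s.drawCalls * newOverheadUs + s.gpuWorkUs;
        std::printf("%s: %.0f us -> %.0f us (%.2fx faster)\n",
                    s.name, oldFrame, newFrame, oldFrame / newFrame);
    }
    return 0;
}
```

With these numbers the tiny-batch frame comes out roughly 8.5x faster, while the GPU-bound frame improves by only about 6 percent.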
 
It was a shot at some funky claims I've seen floating around the net ;) My personal opinion is that it's not going to happen... some contorted cases can be built, as usual, but not in real life. Improvements will probably be apparent mostly due to the improved driver model, with its reduced overhead, but DX9.L also has that, so it's mostly tied to the Vista rearchitecting rather than to DX10.
 