David Kirk of NVIDIA talks about Unified-Shader (Goto's article @ PC Watch)

Acert93 said:
As an NV40 owner, my concern would be that G70/G71 make nominal steps in improving SM3.0 performance. This is interesting (read: typical PR machine) because NV bragged about their SM3.0 capabilities in 2004; now they are taking the "we are smaller" angle, when in fact that benefits them more than consumers, seeing as G70/G71 are in the same consumer cost bracket as ATI's larger chips.
It is possible that it also benefits nVidia's customers, the board vendors. We don't know how the chip prices compare to board prices between the two vendors.

For a consumer like myself, getting features that are usable is more important. Check boxes are irrelevant.
That's fine. But the G7x also offers a lot that ATI does not offer. This includes:
1. Multisampling transparency AA (switchable in driver).
2. Better OpenGL support.
3. Better Linux support.
4. Better SLI support.

Now, clearly nVidia has some catching up to do with respect to some aspects of its shader performance (and FP16 render target AA), but that's just because the NV4x is an older architecture (by some 18 months). As long as these companies aren't launching their new product lines at the same time, you can typically expect this sort of leapfrogging.
 
Acert93 said:
because NV bragged about their SM3.0 capabilities in 2004; now they are taking the "we are smaller" angle, when in fact that benefits them more than consumers, seeing as G70/G71 are in the same consumer cost bracket as ATI's larger chips.

I really think that ATI is missing a golden opportunity here. They should be marketing along these lines:

More silicon for your money

The highest transistor-to-dollar ratio in the industry.

etc.
 
nelg said:
I really think that ATI is missing a golden opportunity here. They should be marketing along these lines:

More silicon for your money

The highest transistor-to-dollar ratio in the industry.

etc.

That would definitely be nV-esque marketing :smile:
 
nelg said:
I really think that ATI is missing a golden opportunity here. They should be marketing along these lines:

More silicon for your money

The highest transistor-to-dollar ratio in the industry.

etc.

You should apply for a job in nVidia's marketing department :LOL:
 
Acert93 said:
Identifying it as marketing is important IMO to get a proper read on what he is saying.

Acknowledged; especially knowing where the original poster you replied to comes from ;)

Done right is relative.

As an NV40 owner, my concern would be that G70/G71 make nominal steps in improving SM3.0 performance. This is interesting (read: typical PR machine) because NV bragged about their SM3.0 capabilities in 2004; now they are taking the "we are smaller" angle, when in fact that benefits them more than consumers, seeing as G70/G71 are in the same consumer cost bracket as ATI's larger chips.

That's still within the usual marketing wash from either side. I took a different perspective: once we see the first D3D10 GPUs, wouldn't you then rather think that THIS is SM3.0 done right? And that doesn't go for only one IHV.

NV could have invested more die space in G70/G71 for better dynamic branching and vertex texturing performance. So while ATI may be late, NV can be said to be incomplete. So what is better: Done better (usable) or hitting a check box?

Yes, they could have, but since IHVs make their design decisions based on available resources and sales-synergy tactics, it is my opinion that:

a) there was neither enough time nor the resources for bigger changes in G7x.
b) a more complex chip wouldn't give them the possible advantage they might have today in the high-end notebook GPU market.

They can't win everywhere; there has to be a hierarchy, and sacrifices are unavoidable, unless we were talking about far higher transistor budgets per manufacturing process, much longer product/refresh cycles and a single target market, such as the PC mainstream market exclusively.

For a consumer like myself, getting features that are usable is more important. Check boxes are irrelevant.

Careful: new features, upon introduction, are most of the time mostly of developer interest. Assume you had a very efficient D3D10 GPU today (which isn't unfeasible anymore); what exactly would you as a consumer do with it today? Run happy-go-merry tech demos?

It's all perspective. But I think NV has a long enough history, shrewd as it is in sales, market penetration and OEM contracts, of going for the check-box mark and hitting performance targets in the following generation.

Your point would have been valid if ATI had released R520 in spring 2004.

As for SM3.0 in general, it is not leaving us any time soon. Both next-gen consoles are SM3.0, and that will strongly influence what we see on the PC side for years, especially cross-platform titles. We just saw our first SM2.0-only game (Oblivion) well over 3 years after DX9 shipped (Fall 2002). And if history is any indicator, NV's first DX10 part will provide excellent SM3.0 performance, but will probably be insufficient for DX10 SM4.0-only/heavy tasks. This is conjecture, but it seems to fit Kirk's comments and past trends. There is no future-proof GPU, but SM3.0 should not be ignored unless you plan to upgrade in the next year IMO.

IHVs will most likely tell you that "done right" comes with +1 and even more so with +2, unless I've totally misunderstood that updated scheme. Sweet irony: it's always been like that, at least IMHLO.
 
Ailuros said:
Careful: new features, upon introduction, are most of the time mostly of developer interest. Assume you had a very efficient D3D10 GPU today (which isn't unfeasible anymore); what exactly would you as a consumer do with it today? Run happy-go-merry tech demos?


Demos today, games tomorrow. Most normal people don't like to upgrade every 6 months.
 
compres said:
Demos today, games tomorrow. Most normal people don't like to upgrade every 6 months.

I have severe doubts that you'll see real D3D10 games only 6 months after the D3D10 GPUs arrive. Someone in this thread accurately mentioned that one of the real DX9.0 games on shelves today is Oblivion.
 
Ailuros said:
I have severe doubts that you'll see real D3D10 games only 6 months after the D3D10 GPUs arrive. Someone in this thread accurately mentioned that one of the real DX9.0 games on shelves today is Oblivion.

Does that mean HL2 and FarCry are not "real" DX9? I don't get it.
 
compres said:
Does that mean HL2 and FarCry are not "real" DX9? I don't get it.

A major design goal of Unreal Engine 3 is that designers should never, ever have to think about "fallback" shaders, as Unreal Engine 2 and past mixed-generation DirectX6/7/8/9 engines relied on. We support everything everywhere, and use new hardware features like PS3.0 to implement optimizations: reducing the number of rendering passes to implement an effect, to reduce the number of SetRenderTarget operations needed by performing blending in-place, and so on. Artists create an effect, and it's up to the engine and runtime to figure out how to most efficiently render it faithfully on a given hardware architecture.

http://www.beyond3d.com/interviews/sweeneyue3/

The game engines you are referring to are mixed-generation DX7/9.0 hybrids. The baseline is most of the time DX7/8, with a varying amount of DX9.0 effects added to the mix.

Or do you think that just because upcoming games like Crysis will carry a D3D10 marketing label on the box, they'll be true D3D10 games?
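
To make the pass-reduction point in the Sweeney quote above a bit more concrete, here is a rough C++/D3D9 sketch (mine, not from the interview or from any real engine): an SM2.0-style path that re-submits the geometry once per light and blends additively into the render target, versus a ps_3_0-style path where a single shader loops over all the lights itself. The device is assumed to already exist, and the helper functions are hypothetical stand-ins for engine code.

Code:
#include <d3d9.h>

// Hypothetical engine helpers (stubs so the sketch is self-contained);
// a real engine would upload light constants and submit the scene here.
static void SetLightConstants(IDirect3DDevice9* dev, int lightIndex) { (void)dev; (void)lightIndex; }
static void SetLightCountConstant(IDirect3DDevice9* dev, int numLights) { (void)dev; (void)numLights; }
static void DrawSceneGeometry(IDirect3DDevice9* dev) { (void)dev; }

// SM2.0-style: one pass per light, accumulated with additive framebuffer blending.
void RenderLitScene_MultiPass(IDirect3DDevice9* dev,
                              IDirect3DPixelShader9* psSingleLight,
                              int numLights)
{
    dev->SetPixelShader(psSingleLight);
    for (int i = 0; i < numLights; ++i)
    {
        // First pass just writes; later passes add their light on top.
        dev->SetRenderState(D3DRS_ALPHABLENDENABLE, i > 0 ? TRUE : FALSE);
        dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
        dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
        SetLightConstants(dev, i);
        DrawSceneGeometry(dev);   // geometry is re-submitted for every light
    }
}

// SM3.0-style: a ps_3_0 shader loops over the lights itself, so the geometry
// is submitted once and the accumulation happens in registers ("in place")
// instead of via render-target blending.
void RenderLitScene_SinglePass(IDirect3DDevice9* dev,
                               IDirect3DPixelShader9* psLoopAllLights,
                               int numLights)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
    dev->SetPixelShader(psLoopAllLights);
    SetLightCountConstant(dev, numLights);
    DrawSceneGeometry(dev);       // one draw, regardless of light count
}

None of this gives the consumer a new effect to point at; it's the same lighting done with fewer passes and less render-target traffic, which is part of why a "DX9 game" label on the box says so little about how an engine actually uses the hardware.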
 
Some people seem to think that only a game using every single feature of DX9 is a "true DX9 game", which is kinda wrong IMO.
 
_xxx_ said:
Some people seem to think that only a game using every single feature of DX9 is a "true DX9 game", which is kinda wrong IMO.

What's the difference, then, between a game that is mostly based on DX7 with only a few DX9 shaders added to the mix, and a game that consists, in its vast majority, of DX9-class shaders?

Or else (since that's what the current debate is all about), tell me exactly which D3D10 effects the first games labelled as D3D10 will have that can't be rendered on a DX9.0c GPU. Of course it might be a better idea to use a D3D10 GPU in such a case, but is that because the compliance actually matches, or because those D3D10 GPUs happen to be better/more efficient DX9.0c GPUs in a relative sense?
 
Ailuros said:
What's the difference, then, between a game that is mostly based on DX7 with only a few DX9 shaders added to the mix, and a game that consists, in its vast majority, of DX9-class shaders?


If it runs and looks about the same, none for the consumer IMHO. That would rarely be the case, but just as a thought, there would be no perceivable difference.

EDIT: to make it clear, only if the game uses some features to accomplish visuals which are not possible with previous DX versions would it be a "true" DX9 game in this context.
 
Well, I think the easiest definition of a "true" DX9 (or any similar tech level) game is one that was designed from the beginning to require that tech at a minimum. Fallbacks may be developed for earlier tech at some point, but as long as the primary rendering line isn't changed to make the fallback easier to program, it should remain a (your favorite tech level here) game.
 
_xxx_ said:
If it runs and looks about the same, none for the consumer IMHO. That would rarely be the case, but just as a thought, there would be no perceivable difference.

EDIT: to make it clear, only if the game uses some features to accomplish visuals which are not possible with previous DX versions would it be a "true" DX9 game in this context.

That would be one perspective, but then you have to evaluate what the percentage of DX9.0 shaders in game X is. If we were talking, for example, only about a couple of paltry reflections in some water puddles, what then?
 
HL2 IMHO is a piecemeal DX9 game. It is, by and large, a traditional DX7/8 game, with DX9 shaders slapped in. The Source engine was not written from the ground up to take advantage of DX9. The game looks "good" mostly because of good textures and artistic direction. A lot of hype was drummed up by Valve presentations on some of the particular shaders they were using, but overall, the game looked fairly DX7-8 except for a few areas where it was obvious they were using more complex shaders. Even Lost Coast still looks like a hybrid to me.

This is not to say HL2 isn't a good looking game (for its time), but given what we've seen can be done with just PS2.0, it is obviously not representative of what a true "from scratch designed around SM2.0" engine could look like.

Frankly, when you look at the game's file formats, map structures, and its entity system, it looks more like a major evolution of what they had for HL1 than a complete rewrite, even though Valve will protest and claim it was a rewrite. The stolen source code says otherwise.
 
Ailuros said:
That would be one perspective, but then you have to evaluate what the percentage of DX9.0 shaders in game X is. If we were talking, for example, only about a couple of paltry reflections in some water puddles, what then?

No idea, just playing devil's advocate here. Again, I'm talking about what the average Joe sees in there.
 
_xxx_ said:
No idea, just playing devil's advocate here. Again, I'm talking about what the average Joe sees in there.

Is that the same average Joe who believed in the 100 T&L games years ago? *runs for his life* ;)
 