Vista delays = D3D10 GPU delays ?

What worries me more about the Vista delays is how much impact they will have on value/mainstream graphics.

I think we can all agree that we want SM4 features/performance as soon as possible at the lower end of the graphics scale, simply to make a more compelling market for publishers to release nice PC versions of games.

What I'm envisaging here is that the expected flurry of Vista-inspired hardware sales (including graphics cards and IHV IGPs, not Intel IGPs) will be delayed significantly - putting a big hole in the IHVs' revenues - which I imagine will have them pushing back the release of lower-cost parts until the point of pain (i.e. when value/mainstream sales suffer because they're only SM3 parts, not SM4).

A 3-month delay in Vista, say, might translate into a 6- or 9-month delay in value/mainstream SM4. I hope not...

Jawed
 
It seems I've seen the suggestion around here in one or two places that, for ATI at least, unified might provide some silicon savings in the lower half of the ASIC range due to higher efficiency. If that's true, it would incentivize them to go sooner rather than later down there. And there was that whole "R6xx" thing. . .
 
I would tend to agree - but then look how long it took for ATI to put out a 90nm northbridge.

Jawed
 
Jawed said:
I would tend to agree - but then look how long it took for ATI to put out a 90nm northbridge.

Jawed

The noises both IHVs have been making about Vista creating an enlarged opportunity for discrete does support what you're pointing at.
 
geo said:
It seems I've seen the suggestion around here in one or two places that, for ATI at least, unified might provide some silicon savings in the lower half of the ASIC range due to higher efficiency. If that's true, it would incentivize them to go sooner rather than later down there. And there was that whole "R6xx" thing. . .

Are you suggesting we'll never again see some number of quads, just shader arrays instead? From my point of view, though, I think neither G80 nor R600 is a unified iteration in hardware - the unification is in the driver's binary code.
 
trumphsiao said:
Are you suggesting we'll never again see some number of quads, just shader arrays instead? From my point of view, though, I think neither G80 nor R600 is a unified iteration in hardware - the unification is in the driver's binary code.

The common "facts" on this... R600 is unified in hardware; G80 is unified only in software.
 
Jawed said:
Sigh :oops:

Oh well, at least that intensifies my point :LOL:

Jawed

For a northbridge, why should they move to 90nm when it's likely cheaper to produce at 110nm?
 
Blazkowicz_ said:
Will Vista even ship with D3D10?
D3D10 can come later - who cares? The 9700 Pro was out months before DirectX 9 and it didn't hurt it a single bit.

Actually, it DID hurt 9700 sales, but not directly.

When the 9700 Pro was released, there were NO DX9 benchmarks, so the R9700 was benchmarked against the Ti4600 with old benchmarks, and often showed only some 25% performance increase over the Ti4600 - which left some people disappointed in the 9700 Pro and made them wait for the GF5800 (which in reality was the real disappointment, but that wasn't known then).

Also, the GF5800 did OK on old benchmarks, before it could be tested with the new benchmark programs that showed its weaknesses in FP/DX9 performance.

Only when DX9 benchmarks appeared could the R9700 really show its muscles.
 
Benching on 3DMark 2001 and the Quake 3 engine surely wasn't stressful enough, but even in that case I remember the very massive increases with AF and the AA+AF combo.
 
hkultala said:
Actually, it DID hurt 9700 sales, but not directly.

When the 9700 Pro was released, there were NO DX9 benchmarks, so the R9700 was benchmarked against the Ti4600 with old benchmarks, and often showed only some 25% performance increase over the Ti4600 - which left some people disappointed in the 9700 Pro and made them wait for the GF5800 (which in reality was the real disappointment, but that wasn't known then).

Also, the GF5800 did OK on old benchmarks, before it could be tested with the new benchmark programs that showed its weaknesses in FP/DX9 performance.

Only when DX9 benchmarks appeared could the R9700 really show its muscles.

Errrm. What I vividly recall about R9700 Pro benchies vs the Ti4600 was a bitch-slapping of epic and historic proportions in AA performance (~2.5x is the impression in my mind 3.5yrs later, without running off to check). Such that to this day I still think of the R9700 as the inflection point where, at the top anyway, some level of AA (even if "only" 2x) became the standard rather than the exception.

And that didn't require DX9.

Edit: My IRC homeys got my back: http://img.hexus.net/v2/gfxcards/ati/9700pro/SS28x.gif
 
geo said:
Such that to this day I still think of the R9700 as the inflection point where, at the top anyway, some level of AA (even if "only" 2x) became the standard rather than the exception.

More importantly, the 9700 Pro was - without a doubt - the first card that could deliver AA and AF at the same time. Granted, it might have been 2xAA + 8xAF at 1024x768, but it was a definite "inflection point" in 3D.
 
Those are precisely the settings at which I run Medal of Honor and RTCW on my Ti 4200 (I've been playing them recently, as they didn't look as nice or run as fast when I played them on the Voodoo5 ;)). Framerates are very good.
For a 9700 Pro in its time, think 4x/16x, max quality - and rotated-grid AA, which makes 4x on par with the Voodoo5 regarding edges. 4x on the GeForce 4 and GeForce FX is worthless.
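The rotated-grid point above can be illustrated numerically. A minimal sketch (my own illustration, not from the thread; sample offsets are representative, not the exact hardware patterns): for an edge that is nearly horizontal or vertical, the number of distinct coverage steps per pixel equals the number of distinct sample offsets along that axis. An ordered 2x2 grid reuses offsets, while a rotated grid gives every sample a unique offset on both axes:

```python
# Representative 4x sample positions within a unit pixel (assumed offsets).
# Ordered 2x2 grid: samples share x and y offsets pairwise.
ordered = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
# Rotated grid: each sample has a unique x offset and a unique y offset.
rotated = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]

def gradient_steps(samples, axis):
    """Count distinct sample offsets along one axis - i.e. the number of
    intermediate coverage levels available for an edge nearly parallel to
    the other axis."""
    return len({s[axis] for s in samples})

print(gradient_steps(ordered, 0), gradient_steps(ordered, 1))  # 2 2
print(gradient_steps(rotated, 0), gradient_steps(rotated, 1))  # 4 4
```

So for near-axis-aligned edges, rotated-grid 4x behaves like a 4-step gradient on each axis, where ordered-grid 4x only manages 2 - which is consistent with the post's claim that GeForce 4/FX ordered-grid 4x looked far worse than R300's or the Voodoo5's patterns.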
 
I think this "delay" has nothing to do with the D3D10 GPU release schedule, and those parts should come out in Q3/Q4 2006 as they were supposed to before this "delay".
 