David Kirk of NVIDIA talks about Unified-Shader (Goto's article @ PC Watch)

It might be good for nVidia to bet on good performance today and ignore the future. The way I see it, though, when I buy a video card I want it to last for at least 2 years.

Right now I still have my old 9800 and it still plays Far Cry and the new games fairly well, which makes me wonder what my situation would have been if I had bought a GeForce FX back then. I remember the FX outperformed the Radeon in UT2003, which was the game I mostly played back then, and UE2 was used a lot in other games too.
 
The FX was really an outlier as far as nVidia's products go. All of nVidia's products before and after have been better in terms of longevity and performance with new features.
 
cho said:
4 "vertex texture units" ? AFAIK, it should be 8.
Not so sure about that. VTF on G7x is hardly fetch-rate limited and rarely used, so I don't see the point of having 8 units for it.
 
I think NVidia is banking on "performant" GS and VTF being practically irrelevant in upcoming titles. Dynamic branching hasn't even "taken off" yet. I think NVidia is betting that within the usable lifespan of the G80 (a year or so) most DX10 titles will still rely heavily on PS, and only dabble with GS/VTF rather than craft the entire engine around them.
That is if we assume that G90 will come in summer '07 and will be based on a new architecture. And AFAIK G80 is seen as the basis for all of NV's DX10 chips.

I think it's more likely that G80 will be for G90 something like NV30 was for NV40. (And let's hope that it won't be like NV30 in all aspects...)
 
Gubbi said:
He's right that a non-unified shader (NUS) architecture is faster than a US, but that is for existing workloads. The ratio of VS to PS work has been fairly constant over the years, and workloads reflect this.

Sure? Because it seems like I demonstrated once that a unified shader architecture was faster than a non-unified one with less equivalent shader processing power, using existing (now I would say legacy) game workloads ;)
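
To make the intuition concrete, here's a toy back-of-the-envelope sketch in Python. The unit counts and per-frame workload numbers are invented purely for illustration; they aren't taken from any real chip or measurement:

Code:
# Toy comparison of a fixed vertex/pixel split against a unified pool.
# All numbers are made up; "work" is in arbitrary units per frame and
# each hardware unit retires one unit of work per cycle.

def frame_cycles_split(vs_work, ps_work, vs_units, ps_units):
    # A split design finishes when its slowest stage finishes: idle
    # vertex units can't help with pixel work, and vice versa.
    return max(vs_work / vs_units, ps_work / ps_units)

def frame_cycles_unified(vs_work, ps_work, total_units):
    # A unified pool divides the combined work across every unit
    # (ignoring scheduling overhead, which is the real-world catch).
    return (vs_work + ps_work) / total_units

# "Legacy" workload sized exactly for the split (1:6 vertex:pixel work):
print(frame_cycles_split(100, 600, vs_units=8, ps_units=24))   # 25.0 cycles
print(frame_cycles_unified(100, 600, total_units=28))          # 25.0 cycles, with 4 fewer units

# Shift the ratio away from what the split was sized for:
print(frame_cycles_split(300, 400, vs_units=8, ps_units=24))   # 37.5 cycles
print(frame_cycles_unified(300, 400, total_units=28))          # 25.0 cycles

Even on the workload the split was sized for, the smaller unified pool keeps up; as soon as the ratio drifts, it wins outright.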
 
DegustatoR said:
That is if we assume that G90 will come in summer '07 and will be based on a new architecture. And AFAIK G80 is seen as the basis for all of NV's DX10 chips.

I think it's more likely that G80 will be for G90 something like NV30 was for NV40. (And let's hope that it won't be like NV30 in all aspects...)

Is there any timeline for D3D10+1 and D3D10+2?

Apart from that, you sure sound encouraging LOL ;)
 
RoOoBo said:
Sure? Because it seems like I demonstrated once that a unified shader architecture was faster than a non-unified one with less equivalent shader processing power, using existing (now I would say legacy) game workloads ;)
Well, that still depends upon the cost of implementing a unified shader efficiently, and your test was still a synthetic test and may not hold up in reality.

But yes, a USA can certainly show a significant efficiency improvement even in current games (hell, even old games).
 
Chalnoth said:
Well, that still depends upon the cost of implementing a unified shader efficiently, and your test was still a synthetic test and may not hold up in reality.

But yes, a USA can certainly show a significant efficiency improvement even in current games (hell, even old games).


I sincerely hope a USA can improve mini FPS framerates, not just DX10 titles per se.
 
trumphsiao said:
I sincerely hope a USA can improve mini FPS framerates, not just DX10 titles per se.

Do you mean minimum framerates? That depends, and will most likely still depend on the CPU used.
 
It is an interesting thought though, and I've wondered about it myself. I'll be looking forward to [H]'s FPS graphs to take a peek at that. It stands to reason it might, since you're no longer dependent on "guessing right" a year or more in advance on the relative workload, but instead can dynamically shift resources to meet whatever the app is throwing at you. Entirely? No, of course not -- but better than you could before. So, maybe not the absolute minimum if that is being driven by CPU or bandwidth (or probably a couple of other factors I'm not thinking of at the moment). But maybe a smoother, less jerky graph overall.
 
I don't think there would be a significant change to framerate variability. The primary benefit would be better load balancing within a single frame, such that all frames should benefit by a rather similar amount.

I seriously doubt that the vertex/pixel load ratio changes that dramatically from place to place within a single game.
 
Ailuros said:
Is there any timeline for D3D10+1 and D3D10+2?
When thinking about it, this D3D10+1/+2....(whatever) model is really kicking my nuts.

Why is it that MS suddenly uses this kind of weird nomenclature for defining "capabilities" of the HW? It not only sounds crazy, it's also a strange decision to use this differentiation in, well, basically (v.10) one and the same API.

Currently, I'm heavily confused by it, apart from the fact that we don't know what exactly they will change, that is.

Guess I should really stop thinking about it, LOL
 
Sunrise said:
When thinking about it, this D3D10+1/+2....(whatever) model is really kicking my nuts.

Why is it that MS suddenly uses this kind of weird nomenclature for defining "capabilities" of the HW? It not only sounds crazy, it's also a strange decision to use this differentiation in, well, basically (v.10) one and the same API.

Currently, I'm heavily confused by it, apart from the fact that we don't know what exactly they will change, that is.

Guess I should really stop thinking about it, LOL

Your confusion is as good as mine. One reasonable theory I've read on these boards here is that they abandoned the former SM2.0, SM2.0_extended, SM3.0 naming scheme and moved to that one.

Whatever those +1 or +2 stand for, I have a somewhat hard time believing that +1, for instance, will be only a year apart from D3D10.
 
geo said:
It is an interesting thought though, and I've wondered about it myself. I'll be looking forward to [H]'s FPS graphs to take a peek at that. It stands to reason it might, since you're no longer dependent on "guessing right" a year or more in advance on the relative workload, but instead can dynamically shift resources to meet whatever the app is throwing at you. Entirely? No, of course not -- but better than you could before. So, maybe not the absolute minimum if that is being driven by CPU or bandwidth (or probably a couple of other factors I'm not thinking of at the moment). But maybe a smoother, less jerky graph overall.

Would you be in deep shock if I reminded you that Sweeney claimed around 500 GFLOPs for shading alone in the UE3 engine?
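
For a rough sense of the scale involved, here's a quick Python back-of-the-envelope; the resolution and framerate are my own assumptions, and only the 500 GFLOPs figure comes from Sweeney's claim:

Code:
# Back-of-the-envelope: what 500 GFLOPs/s of shading buys per pixel.
flops_budget = 500e9          # Sweeney's claimed shading budget for UE3
pixels = 1600 * 1200          # assumed resolution
fps = 60                      # assumed target framerate
print(flops_budget / (pixels * fps))   # ~4300 shader FLOPs per pixel per frame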
 
Chalnoth said:
I seriously doubt that the vertex/pixel load ratio changes that dramatically from place to place within a single game.
I'm pretty sure it does change a lot. Just think about when you have lots of high-poly characters coming onto the screen, or when your view changes from a simple indoor area to a complex outdoor one. As soon as a high-poly model jumps into your draw list (it may not even be visible on screen) your framerate could tank. If you have a particle simulation going for explosions, then you get sudden bursts of high-poly workloads. When your model LOD changes, usually it's a sudden shift in vertex count but a gradual change in pixel count.

In all likelihood, the vertex/pixel workload does change quite dramatically from scene to scene, frame to frame.
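
A trivial numeric sketch of the LOD case, with made-up vertex counts and pixel coverage just to show the shape of the jump:

Code:
# Hypothetical character model: the LOD switch moves vertex cost in a
# step while the pixels it covers on screen stay roughly the same.
lod1_verts, lod0_verts = 4000, 20000   # made-up LOD vertex counts
covered_pixels = 15000                 # roughly constant screen coverage

print(lod1_verts / covered_pixels)     # ~0.27 vertices per shaded pixel
print(lod0_verts / covered_pixels)     # ~1.33 -- a 5x jump in the ratio
                                       #  the instant the LOD switches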
 
I'll grant you the indoor/outdoor point, but I doubt many of the others are of much concern, most of the time.
 
Ailuros said:
Your confusion is as good as mine. One reasonable theory I've read on these boards here is that they abandoned the former SM2.0, SM2.0_extended, SM3.0 naming scheme and moved to that one.

Whatever those +1 or +2 stand for, I have a somewhat hard time believing that +1, for instance, will be only a year apart from D3D10.

+1 is the next release and +2 is the release after that. They simply don't have publicly announced names since it is still early (recall when DirectX Next got its final name of Direct3D 10). Occam is your friend.
 
db said:
+1 is the next release and +2 is the release after that. They simply don't have publicly announced names since it is still early (recall when DirectX Next got its final name of Direct3D 10). Occam is your friend.

So does anyone have at least an estimate on projected release dates and what each could contain?
 