WinHEC slides give info on early spec of DX in Longhorn

I'm kind of impressed. Especially since it appears that MS has again decided to take OpenGL seriously, at least a little bit. What I can't believe is that it looks like they are finally updating WGL... 1.2 support... get out of here :)
 
And by 2010 when we're at GL 3.0 they will implement GL 1.3. And by 2020 they will finally update WGL to support multiple devices.
 
It will if the IHV has not provided an OpenGL ICD in their driver, or if you are using the drivers shipped with the OS (these do not contain an ICD). Nothing has changed in that respect compared to WinXP.
 
Clootie said:
It will if the IHV has not provided an OpenGL ICD in their driver, or if you are using the drivers shipped with the OS (these do not contain an ICD). Nothing has changed in that respect compared to WinXP.

According to the slide it would be converted even with an ICD
(the ICD being in the IHV-written code part):
OpenGL ICD <-> OpenGL32 (OGL->D3D) <-> D3D9
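For what it's worth, you can already tell from an application whether an IHV ICD got loaded or whether you ended up on Microsoft's generic implementation, just by looking at the GL strings. A minimal sketch in C, assuming a context has already been created and made current via wglCreateContext/wglMakeCurrent; "GDI Generic" is what Microsoft's current generic renderer reports, so whatever the Longhorn OGL->D3D layer ends up reporting may well differ:

#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <GL/gl.h>

/* Call this after a rendering context has been created and made current
   (wglCreateContext + wglMakeCurrent). */
void report_gl_implementation(void)
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    printf("GL_VENDOR:   %s\n", vendor   ? vendor   : "(null)");
    printf("GL_RENDERER: %s\n", renderer ? renderer : "(null)");

    /* Microsoft's generic implementation (no IHV ICD loaded) identifies
       itself as "GDI Generic"; an ICD reports the IHV's own strings. */
    if (renderer && strstr(renderer, "GDI Generic") != NULL)
        printf("No ICD loaded -- running on Microsoft's generic path.\n");
    else
        printf("An IHV ICD appears to be loaded.\n");
}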
 
One interesting thing I noted is the loss of performance for fixed function on the GeForceFX cards, since fixed function will be implemented by shaders.
 
ET said:
One interesting thing I noted is the loss of performance for fixed function on the GeForceFX cards, since fixed function will be implemented by shaders.
Where do you get this from?

The high fixed-function performance is likely due not to special hardware, but rather due to highly-optimized shaders.
 
Chalnoth said:
The high fixed-function performance is likely due not to special hardware, but rather due to highly-optimized shaders.
Uh... lemme put it this way: "Where did you get THAT from?!" :oops:
 
Chalnoth said:
ET said:
One interesting thing I noted is the loss of performance for fixed function on the GeForceFX cards, since fixed function will be implemented by shaders.
Where do you get this from?

The high fixed-function performance is likely due not to special hardware, but rather due to highly-optimized shaders.

DW04018_WINHEC2004.ppt
Slide 9.
 
Since it appears this will be done at the API level, I suppose it could well lower performance even without special hardware, as it may not always be possible to detect when a "fixed-function shader" is being executed.
 
Chalnoth said:
Since it appears this will be done at the API level, I suppose it could well lower performance even without special hardware, as it may not always be possible to detect when a "fixed-function shader" is being executed.
That's true, but beside the point, which is: since when do FXes emulate fixed-function code in shaders? :oops:
 
Since when do they not? I think most people assumed it was added fixed-function hardware that allowed for the higher fixed-function performance. I don't remember it ever being stated.

It does seem apparent now that a large part of the performance problem with the NV3x hardware was that it was too hard to write a properly-optimizing compiler for it. That is, the instruction set governed so much about how the underlying architecture worked that, in the limited time available for shader compiling, it just wasn't possible to get all of the performance kinks worked out.

Don't you think it's possible that this hardware was built by engineers who were used to building fixed-function hardware, and had figured out, before the compiler was actually written, how to make fixed-function run quickly on the shader hardware?
 
It's certainly possible.
However... offhand, I can't see anything pointing towards this explanation.
E.g. why would HW engineers mess with writing FF-substituting shaders in the first place? It seems unlikely for them to mess with the driver, FF or not.
 
Well, they kinda had to, considering the unified driver architecture that nVidia uses. Part of this unified driver architecture resides on the GPU, and should allow, say, a GeForce 6800 Ultra to operate using TNT drivers.
 
Chalnoth said:
Well, they kinda had to, considering the unified driver architecture that nVidia uses. Part of this unified driver architecture resides on the GPU, and should allow, say, a GeForce 6800 Ultra to operate using TNT drivers.

Huh? I thought the software was backwards compatible with the hardware, not the other way around :?:
 
Well, what's interesting is that using position invariance with vertex programs is actually faster than doing the transform of the position in the vertex program itself. So they must have some hardware there that they can toggle on and off, and mix and match. I don't think it's quite what people think (a completely separate unit for fixed-function T&L), but some parts must be shared.
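To make the position-invariance point concrete, here's a rough sketch of the two ways of writing the same vertex program with ARB_vertex_program (just the program strings as C literals; the extension-loading boilerplate is omitted). Both draw the same thing, but the invariant one is reportedly the faster of the two on FX hardware, which is what suggests a shared/toggleable transform path rather than a pure shader replacement:

/* Version 1: the program computes the clip-space position itself. */
static const char vp_explicit_transform[] =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP pos;\n"
    "DP4 pos.x, mvp[0], vertex.position;\n"
    "DP4 pos.y, mvp[1], vertex.position;\n"
    "DP4 pos.z, mvp[2], vertex.position;\n"
    "DP4 pos.w, mvp[3], vertex.position;\n"
    "MOV result.position, pos;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

/* Version 2: ARB_position_invariant -- the program may not write
   result.position at all; the driver produces it exactly as fixed
   function would, using whatever path it likes for that. */
static const char vp_position_invariant[] =
    "!!ARBvp1.0\n"
    "OPTION ARB_position_invariant;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

Either string gets loaded with glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen(src), src) once the entry point has been fetched via wglGetProcAddress.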
 
Chalnoth said:
...should allow, say, a GeForce 6800 Ultra to operate using TNT drivers.
Isn't the "unification" done backwards only? :?: Last time I checked (GF4 era) you DID need driver support to run a given card.
 
I asked Cass Everitt about this issue because I always thought that the GF3 had both a fixed-function T&L unit and a VS unit, and that with the GF4 the T&L unit was replaced with another VS. It seems I was wrong, because he answered me this:
Zeross,
It essentially uses the same computational resources, but fixed-function does not use a stored program model on GeForce hardware.

Thanks -
Cass

Basically, that means there is no dedicated hardware for fixed-function T&L, but on the other hand it's not an "emulation" like what the R300 is doing.
 