DeltaChrome review at Hexus

I've never heard of a wider bus for possible variations of DC.

ah yes, the visionary of the industry!

ROFL Althornin. I think people also tend to forget how close we are to the next generation of products while DC still isn't widely available.

Anyway I'm still curious to see what the exact and real performance drop is for 2xSSAA or its 16x AF. That newer set of drivers shouldn't have any bandwidth saving features enabled either; the results look too close to the first set.
 
Ailuros said:
Anyway I'm still curious to see what the exact and real performance drop is for 2xSSAA or its 16x AF. That newer set of drivers shouldn't have any bandwidth saving features enabled either; the results look too close to the first set.

Just a thought: how do you get bandwidth saving features in a driver?
Isn't that done in hardware only?
I'm really curious to know more, as I thought this could only be done in hardware and not in software; those who tried to do it in software have failed so far (3dfx and their infamous HSR).
 
vnet said:
Ailuros said:
Anyway I'm still curious to see what the exact and real performance drop is for 2xSSAA or its 16x AF. That newer set of drivers shouldn't have any bandwidth saving features enabled either; the results look too close to the first set.

Just a thought: how do you get bandwidth saving features in a driver?
Isn't that done in hardware only?


I think what he means is whether the driver is enabling any of the bandwidth saving features in the card... It is my understanding that you can enable/disable features that are built into the core through the drivers, and it is in this way that the bandwidth saving features get disabled...

Granted, due to "stronger" CPUs we might see some better results from HSR in the drivers (à la 3dfx), but it would be really surprising if S3 spent resources on that instead of fixing the problems they've got...
It would be more probable for them to try to implement it after most of the bugs were gone, and tout it as a speed increase (as Nvidia did with some Detonators)...

Or I could be way off base :)
 
I'm more curious about the S4 performance though. I could be wrong, but wouldn't the S4 be a good choice for an integrated chipset? The Savage4 was a decent card, and even today it is still shipped (not exactly the same core, but almost identical) in the form of integrated chipsets (lots of Athlon notebooks use a Twister chipset, for instance). But it's getting old...
 
vnet said:
Ailuros said:
Anyway I'm still curious to see what the exact and real performance drop is for 2xSSAA or its 16x AF. That newer set of drivers shouldn't have any bandwidth saving features enabled either; the results look too close to the first set.

Just a thought: how do you get bandwidth saving features in a driver?
Isn't that done in hardware only?
I'm really curious to know more, as I thought this could only be done in hardware and not in software; those who tried to do it in software have failed so far (3dfx and their infamous HSR).

How would you want to operate hardware without the correct software anyway?

Example: Parhelia is capable of its own in-house implementation of HW-based displacement mapping; the only catch is that it can't be used because the driver doesn't expose it.

Bandwidth saving features can in many cases today be enabled or disabled through a card's drivers; you can switch on or off, say, early Z, hierarchical Z, or whatever else a card supports. If the driver never makes the relevant calls, the hardware units responsible for those features simply won't be used. In the case of DC I suspect a minor bug.
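
To make that concrete, here's a purely made-up sketch in C of what "turning a feature off in the driver" boils down to; the register name, bit positions and write_reg() are all invented for illustration and have nothing to do with the actual DC programming interface:

/* Purely hypothetical sketch: register name, bit positions and
 * write_reg() are invented for illustration; no real DeltaChrome
 * programming details are implied. */
#include <stdio.h>

#define REG_RENDER_CTRL 0x1000u      /* hypothetical control register */
#define BIT_EARLY_Z     (1u << 0)    /* hypothetical early-Z enable   */
#define BIT_HIER_Z      (1u << 1)    /* hypothetical hier-Z enable    */

static void write_reg(unsigned reg, unsigned val)
{
    /* a real driver would poke an MMIO register here */
    printf("reg 0x%04x <- 0x%08x\n", reg, val);
}

/* If the driver never sets these bits, the corresponding hardware
 * units simply sit idle -- which is all "disabling a feature in the
 * driver" really amounts to. */
static void set_bandwidth_savers(int early_z, int hier_z)
{
    unsigned v = 0;
    if (early_z) v |= BIT_EARLY_Z;
    if (hier_z)  v |= BIT_HIER_Z;
    write_reg(REG_RENDER_CTRL, v);
}

int main(void)
{
    set_bandwidth_savers(1, 1);   /* both features on            */
    set_bandwidth_savers(0, 0);   /* both off: units stay unused */
    return 0;
}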

Is that simple enough?

***edit: sorry Garibaldi; I initially missed your post. It was sufficient already.
 
Garibaldi said:
Granted, due to "stronger" CPUs we might see some better results from HSR in the drivers (à la 3dfx), but it would be really surprising if S3 spent resources on that instead of fixing the problems they've got...
It would be more probable for them to try to implement it after most of the bugs were gone, and tout it as a speed increase (as Nvidia did with some Detonators)...

I'm sure that all, or at least the majority of, today's driver sets contain CPU-specific instructions to increase performance even more (which should not be confused with that HSR thing; it's actually more software T&L than anything else).
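
As a rough illustration of what I mean by CPU-specific instructions, here's a small sketch of a software T&L inner loop using SSE intrinsics; the matrix layout and function name are my own choices, not taken from any actual driver:

/* A minimal sketch, assuming SSE1 intrinsics from <xmmintrin.h>.
 * Computes out = M * v for 'count' vec4 vertices; M is a column-major
 * 4x4 matrix (16 floats).  Layout and naming are illustrative only. */
#include <xmmintrin.h>

void transform_vertices_sse(const float *M, const float *in,
                            float *out, int count)
{
    __m128 c0 = _mm_loadu_ps(M + 0);   /* matrix columns */
    __m128 c1 = _mm_loadu_ps(M + 4);
    __m128 c2 = _mm_loadu_ps(M + 8);
    __m128 c3 = _mm_loadu_ps(M + 12);

    for (int i = 0; i < count; ++i, in += 4, out += 4) {
        __m128 x = _mm_set1_ps(in[0]);
        __m128 y = _mm_set1_ps(in[1]);
        __m128 z = _mm_set1_ps(in[2]);
        __m128 w = _mm_set1_ps(in[3]);

        /* result = c0*x + c1*y + c2*z + c3*w */
        __m128 r = _mm_add_ps(
                       _mm_add_ps(_mm_mul_ps(c0, x), _mm_mul_ps(c1, y)),
                       _mm_add_ps(_mm_mul_ps(c2, z), _mm_mul_ps(c3, w)));
        _mm_storeu_ps(out, r);
    }
}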

NVIDIA did have healthy speed increases in the past, but I'd say those came through entirely different methods; I personally consider the ones I'm aware of to be clever offloading of work onto otherwise idle units, which is something completely different too, yet should happen at other IHVs to some degree as well.
 
Ailuros said:
***edit: sorry Garibaldi; I initially missed your post. It was sufficient already.

Hehe NP :)
Just nice to see I was correct.

Ailuros said:
Granted, due to "stronger" CPUs we might see some better results from HSR in the drivers (à la 3dfx), but it would be really surprising if S3 spent resources on that instead of fixing the problems they've got...
It would be more probable for them to try to implement it after most of the bugs were gone, and tout it as a speed increase (as Nvidia did with some Detonators)...

I'm sure that all, or at least the majority of, today's driver sets contain CPU-specific instructions to increase performance even more (which should not be confused with that HSR thing; it's actually more software T&L than anything else).

NVIDIA did have healthy speed increases in the past, but I'd say those came through entirely different methods; I personally consider the ones I'm aware of to be clever offloading of work onto otherwise idle units, which is something completely different too, yet should happen at other IHVs to some degree as well.

Interesting. I didn't think it likely that software HSR was about to make a "comeback", due to the extra load it adds to the CPU...

But wouldn't using software T&L soon become "outdated" (sorry, can't come up with a better word) due to the evolution of graphics cards and the increased complexity in games with respect to AI?

I can see how it would be useful for older cards if the system is GPU-bound, but I would imagine that the dev time to create the software T&L would be too costly, since those systems would probably be a minority...
 
But wouldn't using software T&L soon become "outdated" (sorry, can't come up with a better word) due to the evolution of graphics cards and the increased complexity in games with respect to AI?

As a layman I'm not really qualified to give an accurate prediction on that one; it would be more of a lucky guesstimate.

I'd say it highly depends on how game code will evolve and in what direction. Driver developers will see for themselves whether it makes sense or not.

Take SS:SE as an example. No, it isn't a fully optimized T&L game, but then again how many are, even today? Under display adapter I get the following listed in the game: Radeon 9700 PRO x86/MMX/3DNow!/SSE

Coincidence?
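
For what it's worth, building a capability string like that is just a CPUID probe before picking a code path. Here's a rough sketch assuming GCC/Clang on x86; the output format is my own and not necessarily how SS:SE or the driver actually does it:

/* A rough sketch, assuming GCC/Clang on x86 (<cpuid.h>).  The string
 * format is my own; bit positions are the documented CPUID ones
 * (MMX = EDX bit 23, SSE = EDX bit 25, 3DNow! = extended EDX bit 31). */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    char caps[64] = "x86";

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        if (edx & (1u << 23)) strcat(caps, "/MMX");
        if (edx & (1u << 25)) strcat(caps, "/SSE");
    }
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        if (edx & (1u << 31)) strcat(caps, "/3DNow!");
    }
    printf("%s\n", caps);   /* e.g. "x86/MMX/SSE" */
    return 0;
}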
 
Ailuros said:
As a layman I'm not really qualified to give an accurate prediction on that one; it would be more of a lucky guesstimate.

I'd say it highly depends on how game code will evolve and in what direction. Driver developers will see for themselves whether it makes sense or not.

Take SS:SE as an example. No, it isn't a fully optimized T&L game, but then again how many are, even today? Under display adapter I get the following listed in the game: Radeon 9700 PRO x86/MMX/3DNow!/SSE

Coincidence?

Ok, but thanks for your thoughts so far :)

Maybe someone else can shed some more light on the topic, though perhaps not in this thread as I seem to have moved it way OT...
 
Is this card ever going to be sold or what :LOL: If they wait any longer they might as well stop producing it, as the competition will walk all over it.
 