The most important hardware feature/tech so far

AF. If you said anything else you would be wrong and your homework would be to play all your games with trilinear instead of AF for 2 weeks.
 
ATI's Programmable AA - the thing that breathed some "new life" into the seemingly archaic R3xx hardware, with acceptable performance tradeoffs and better quality (and if they find a better way of improving quality or performance within the same hardware, it wouldn't be too far off)

I definitely agree with AF... reducing as much texture blur as you possibly can does wonders for your eyes.

On the topic of AF for a moment... isn't there something about AF that makes it better than trilinear? I read somewhere it has something to do with AF sampling an "elliptical shape" whereas trilinear uses a "box shape".
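
Something like this, if I understood it right - a rough C++ sketch of the idea, not any actual hardware's algorithm (trilinearSample is a made-up stand-in for an ordinary trilinear lookup):

[code]
struct Color { float r, g, b; };

// Stand-in for an ordinary trilinear lookup; a real implementation
// would blend bilinear taps from two mip levels here.
Color trilinearSample(float u, float v)
{
    (void)u; (void)v;
    return Color{0.5f, 0.5f, 0.5f};
}

// Trilinear treats the pixel's footprint in texture space as roughly
// square; AF projects it as an ellipse and blends several trilinear
// taps spaced along the ellipse's long axis.
Color anisotropicSample(float u, float v,           // footprint centre
                        float majorU, float majorV, // long-axis vector
                        int taps)
{
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < taps; ++i)
    {
        float t = (i + 0.5f) / taps - 0.5f;  // -0.5 .. +0.5 along the axis
        Color c = trilinearSample(u + t * majorU, v + t * majorV);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= taps; sum.g /= taps; sum.b /= taps;
    return sum;
}
[/code]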
 
The programmable pipeline has been more of an evolution over the last few years... with each revision (starting at D3D7) they've slowly pushed more and more work over to the GPU. I'd argue that apart from maybe D3D8.0 there hasn't been a "single" technology in that sense; the programmable pipeline is more of a long-term roadmap.

My vote goes for High Dynamic Range rendering - and I don't mean the funky "lens flare of '05" post-processing stuff, I mean the ability to render to, and process, data outside of 8-bit integer quantities. It often amazes me how long we put up with such a restrictive format...

As the technology matures a bit and people tone down the post-processing effects, I'm expecting people to sit back and wonder how we survived without it :smile:
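
In D3D9 terms it's as simple as asking the device for a float surface instead of X8R8G8B8 - a minimal sketch, assuming the hardware actually reports support for the format (check with CheckDeviceFormat first):

[code]
#include <d3d9.h>

// Assumes an existing, initialised device.
IDirect3DTexture9* CreateHDRTarget(IDirect3DDevice9* device,
                                   UINT width, UINT height)
{
    IDirect3DTexture9* target = NULL;
    device->CreateTexture(
        width, height, 1,          // one mip level
        D3DUSAGE_RENDERTARGET,     // we render the scene into it
        D3DFMT_A16B16G16R16F,      // 16-bit float per channel, not 8-bit int
        D3DPOOL_DEFAULT,
        &target, NULL);
    return target;                 // tone-map to the 8-bit back buffer later
}
[/code]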

Jack
 
pixel shader 2.0 level fragment programmability. no doubt. so many restrictions were relaxed in this revision (and with an actually very usable implementation, the 9700) that i think it was bigger than hw T&L.
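
for a sense of scale: ps 1.x gave you a handful of fixed-point arithmetic ops clamped to a small range, while ps 2.0 mandates float precision, far longer programs and arbitrary dependent texture reads. a rough C++ stand-in (hypothetical names) for the kind of per-fragment math that suddenly became practical, like per-pixel specular with a real pow():

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Arbitrary-exponent specular per fragment: trivial in ps 2.0 float
// math, effectively impossible in ps 1.x's few fixed-point slots.
float specular(const Vec3& normal, const Vec3& halfVec, float exponent)
{
    float ndoth = dot(normal, halfVec);
    return std::pow(ndoth > 0.0f ? ndoth : 0.0f, exponent);
}
[/code]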
 
Hmm, a lot of the replies here aren't exactly what I had in mind when I started this thread (and I'll admit I worded my post poorly). I was thinking more of things that aren't tied very closely to APIs (and their specifications). Things like jittered/rotated AA or bandwidth-saving tech... those kinds of things.

Perhaps the words I meant to post were "independent innovations".

Anyway, if I were to go with the flow of this thread, I'd have to vote for HLSL.
 
Reverend said:
... since the debut of hardware TnL, I might add.

This forum's been pretty boring for some time now so I thought I might as well ask you guys what you think has been the most important 3D hardware technology or feature since (and including, which is important) the debut of DX7 3D hardware.

Overall I think that Double Data Rate (DDR) memory, and more specifically GDDR, has been a huge addition, and we wouldn't be where we are now without such advances. Recently the obvious answer would fall into the lap of PCI Express; however, AGP deserves huge props for being able to extend its life throughout the changes. One thing I wish we had seen in the consumer market would have been the full utilization of AGP 3.0's ability to have more than one AGP port and upwards of 4GB of memory addressing.
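
A quick back-of-the-envelope on what DDR buys you (9700 Pro-class numbers from memory, so treat them as approximate):

[code]
#include <cstdio>

int main()
{
    // Peak bandwidth = memory clock * transfers per clock * bus width.
    // DDR transfers on both clock edges: two transfers per clock.
    const double clockHz  = 310e6;       // ~9700 Pro memory clock
    const double busBytes = 256 / 8.0;   // 256-bit bus
    std::printf("SDR: %4.1f GB/s\n", clockHz * 1 * busBytes / 1e9); //  9.9
    std::printf("DDR: %4.1f GB/s\n", clockHz * 2 * busBytes / 1e9); // 19.8
    return 0;
}
[/code]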
 
Reverend said:
Hmm, a lot of the replies here aren't exactly what I had in mind when I started this thread (and I'll admit I worded my post poorly). I was thinking more of things that aren't tied very closely to APIs (and their specifications). Things like jittered/rotated AA or bandwidth-saving tech... those kinds of things.

Perhaps the words I meant to post were "independent innovations".

Anyway, if I were to go with the flow of this thread, I'd have to vote for HLSL.

the silicon transistor
 
Reverend said:
Hmm, a lot of the replies here aren't exactly what I had in mind when I started this thread (and I'll admit I worded my post poorly). I was thinking more of things that aren't tied very closely to APIs (and their specifications). Things like jittered/rotated AA or bandwidth-saving tech... those kinds of things.

Perhaps the words I meant to post were "independent innovations".

Anyway, if I were to go with the flow of this thread, I'd have to vote for HLSL.

Ok then ROP orthogonality for HDR + MSAA combinations *runs for his life* :D
 
Deathlike2 said:
ATI's Programmable AA - the thing that breathed some "new life" into the seemingly archaic R3xx hardware, with acceptable performance tradeoffs and better quality (and if they find a better way of improving quality or performance within the same hardware, it wouldn't be too far off)

I definitely agree with AF... reducing as much texture blur as you possibly can does wonders for your eyes.

On the topic of AF for a moment... isn't there something about AF that makes it better than trilinear? I read somewhere it has something to do with AF sampling an "elliptical shape" whereas trilinear uses a "box shape".

I didn't think AF was to trilinear what trilinear was to bilinear, because you can have AF enabled without trilinear.

The movement to shaders instead of just more polygons/higher-res textures has been a big deal. Cards with 10GB/s of bandwidth are still viable nowadays thanks to this; no matter what bandwidth-saving techniques or data buses or whatnot, they would not have kept up otherwise (well, TBDR would have helped a lot, but it's not in use).
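
Rough arithmetic on why that is - entirely hypothetical per-pixel costs, just to show the shape of the tradeoff:

[code]
#include <cstdio>

int main()
{
    // A 4-layer effect at 1024x768, 60 fps.
    const double pixels = 1024.0 * 768.0 * 60.0;
    // Multipass blending: each pass re-reads/writes colour and Z and
    // fetches one texture - call it ~20 bytes per pixel, per pass.
    const double multipassGBs = pixels * 4 * 20.0 / 1e9;
    // One shader pass: the math stays in registers; one colour write,
    // one Z read/write, four texture fetches - call it ~28 bytes.
    const double shaderGBs = pixels * 28.0 / 1e9;
    std::printf("multipass: %.1f GB/s\n", multipassGBs); // ~3.8
    std::printf("shader:    %.1f GB/s\n", shaderGBs);    // ~1.3
    return 0;
}
[/code]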
 
i'd have said HLSL if the thread title didn't state 'hardware feature'.

hardwarewise - bandwidth and deep pipes. graphics have always been about bandwidth and latencies. even when people have been trying to circumvent those (say, TBDR re bandwidth), it's still been about bandwidth and latencies.

give me a 3GHz simd cpu and 10-20MB of L2-, or even better L1-cache-performing memory and enjoy the show : )
 
darkblu said:
i'd have said HLSL if the thread title didn't state 'hardware feature'.

hardwarewise - bandwidth and deep pipes. graphics have always been about bandwidth and latencies. even when people have been trying to circumvent those (say, TBDR re bandwidth), it's still been about bandwidth and latencies.

give me a 3GHz simd cpu and 10-20MB of L2-, or even better L1-cache-performing memory and enjoy the show : )

And that would make a good graphics chip, or just a good companion to one? Why not just a 20MB eDram buffer on a current GPU?
 
Fox5 said:
And that would make a good graphics chip, or just a good companion to one? Why not just a 20MB eDram buffer on a current GPU?

that wouldn't make a good graphics chip, it would make a powerful graphics chip. as in 'the price/performance might suffer a bit'. sticking 20MB of edram onto a present GPU would be a step in the same general direction but i doubt it would produce such a good chip. maybe in a couple of years though, when the shaders become A. totally unified and B. less underperforming at some GPU-foreign yet trivial ops.
 
darkblu said:
that wouldn't make a good graphics chip, it would make a powerful graphics chip. as in 'the price/performance might suffer a bit'. sticking 20MB of edram onto a present GPU would be a step in the same general direction but i doubt it would produce such a good chip. maybe in a couple of years though, when the shaders become A. totally unified and B. less underperforming at some GPU-foreign yet trivial ops.

What do you mean by a SIMD cpu? Surely not, say... a 3GHz P4 (it has SIMD capabilities through SSE); it doesn't seem like it would have the computational ability a powerful GPU would require. If not a 3GHz P4, is there something else you had in mind? A 3GHz G5? A 3GHz POWER5 (which I believe would be multiple times the power of a P4 - but don't current GPUs offer over 30x the computational ability of a P4 for their shaders)?
 