More fill rate and texture sample rate is never a bad thing to have. As bandwidth and transistor density continue to increase, so will data throughput. At some point I think there will be a switch from pixel to voxel rendering, where you sample from volume textures and render into volume textures. To enable this you need all the fill rate you can get. At the same time this will be linked to realistic volumetric physics on the GPU, and there again you need all the bandwidth you can get.
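Just to make the bandwidth argument concrete, here's a minimal sketch of what brute-force voxel rendering does per pixel: a ray march through a dense float volume, written as a CUDA kernel. This is my own illustration, not code from any real engine; the names (density, STEPS, the orthographic walk along z) are placeholders.

```cuda
// Sketch only: every pixel touches STEPS volume samples per frame,
// which is where all that fill rate and bandwidth goes.
__global__ void raymarch(const float* density, int W, int H, int D,
                         float* image, int imgW, int imgH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= imgW || y >= imgH) return;

    const int STEPS = 128;                 // samples per ray
    float accum = 0.0f;
    for (int s = 0; s < STEPS; ++s) {
        // Walk straight through the volume along z (orthographic, for brevity).
        int vx = x * W / imgW;
        int vy = y * H / imgH;
        int vz = s * D / STEPS;
        // One memory fetch per step per pixel: this is the bandwidth cost.
        accum += density[(vz * H + vy) * W + vx];
    }
    image[y * imgW + x] = accum / STEPS;   // crude average as the "rendered" value
}
```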
Aren't voxelized objects worse when there's a need to render their interactions, while with plain pixelized objects we have simple, well-developed techniques to easily give their interactions a real-life look? Aren't voxels just a quick patch for rendering complex 3D objects on hardware that's really built to do it in 2D space?
That wasn't the point though. The point was more: at what cost?
GPUs are getting larger and hotter, with noisier coolers, all the time; we get more GPUs on a card and all that... (because the 'natural' increase in bandwidth and transistor density isn't enough).
And Lara Bee is running cold as polar ice and not wasting any energy? :mrgreen: It seems there's not only hype flying around about LB's proposed performance, but also about how green its performance per watt is. Is that why the estimated LB TDP of 300W never clearly stated whether it was for this 45nm revision or for the older 65nm ones, if any of those even existed? And is that excellent TDP the reason Intel is waiting for a straight-to-32nm Lara Bee release while cleaning out some performance bugs?
What I'm saying is basically this:
Since G80 was introduced, 'nothing' happened. Most games still have DX9-level graphics, sometimes pimped up slightly with a handful of DX10 candy.
And the real reason for it is....... G80's quasi-DX10 support, or rather the fact that they supported it only as far as they could on the pretty advanced but still relatively new G70 architecture (which was 9.0c, btw). So the lack of features and all that mumbling about DX10.1 numbers (missing from nV) showed that not everything is in the numbers themselves, as Charlie tries to remind us, but in nVidia's ability to release their G80 chip 6 months before the competition and blackmail MegaCorps into kicking what was supposed to be real DX10 support out of their upcoming Vista release, putting something that only goes by the name DX10 into that much-anticipated and badly performing future MegaCorps OS.
So in fact we were deprived of everything DX10 had to offer on the R600 architecture in favour of nV's DX10 deal, which was nothing more than eye-candied DX9.0c, as you say. In fact we should have been looking at something closely resembling today's DX11, only 2.5 years earlier. And DX11, well, it would finally bring the new tessellation engine after all.
I don't see DX11 as a big step in flexibility.
The only thing it really adds is tessellation... but ironically enough it is again a fixed approach, not a flexible one.
Even on DX10 hardware you can program tessellation through Compute Shaders.
You couldn't pass up the chance to point out how nothing new is really there in DX11.
But you forgot to emphasise this part:
Even on DX10 hardware you can program tessellation through Compute Shaders, on ATi-based cards that is. And DX10, as I mentioned in the reply above, was exactly the thing that should have introduced the new hardware tessellation capability in the first place. Meanwhile DX11 and the R800 series brought texture compression improvements, with no drop in performance for the old compression methods. And tessellation itself heavily reduces the memory bandwidth needed to produce the same rasterization setup. So I'd say those are pretty big things ATi introduced in its DX11 engine. I see you're still looking for a good reason to give your friends for upgrading to DX11.
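For anyone wondering what "tessellation through Compute Shaders" would even look like, here's a rough compute-style sketch (my own, written as a CUDA kernel for brevity, not ATi's actual implementation): a simple 1:4 midpoint subdivision pass that amplifies a triangle buffer on the GPU. The bandwidth point is that the amplified geometry never has to sit in memory as a pre-tessellated mesh; the struct layout and buffer handling here are assumptions.

```cuda
// Sketch of compute-based geometry amplification (1 triangle in, 4 out).
struct Tri { float3 a, b, c; };

__device__ float3 mid(float3 p, float3 q)
{
    return make_float3(0.5f * (p.x + q.x), 0.5f * (p.y + q.y), 0.5f * (p.z + q.z));
}

__global__ void subdivide(const Tri* in, Tri* out, int numTris)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numTris) return;

    Tri t = in[i];
    float3 ab = mid(t.a, t.b);
    float3 bc = mid(t.b, t.c);
    float3 ca = mid(t.c, t.a);

    // Each input triangle expands to four; the output buffer would be handed
    // straight to the rasterizer instead of being stored as a bigger mesh.
    out[4 * i + 0] = Tri{ t.a, ab, ca };
    out[4 * i + 1] = Tri{ ab, t.b, bc };
    out[4 * i + 2] = Tri{ ca, bc, t.c };
    out[4 * i + 3] = Tri{ ab, bc, ca };
}
```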
The upside of the DX11 approach is that it needs very few transistors to be implemented.
It's not nice to see how people wear polarized glasses for different weather. The reason DX11 was so lightweight for ATi to implement is that rudimentary tessellation has existed in the ATI VLIW engine since their first DX10 chip (R600), so all the basic tessellation setup was already there; all the extra R&D they needed was to improve it, implement it in Compute Shading algorithms, and get it supported by Microsoft in DX11. It's ATi's architectural design, maybe even too much in-situ, and that couldn't have been provided merely by the DX11 API from MegaCorps itself or by any third party.
I just hope that same VLIW engine can carry the additional capability to cope with the next thing on the horizon: FMA. And I hope ATi won't turn a lack of FMA into a marketing war, the way Huang has been doing for the last year and a half with the constant R&D troubles of GT200 and now G300. And let's not forget that all that indirection capability of G300 (Fermi, renamed after Larrabee->Cypress) doesn't look terribly appealing unless it is as inexpensive in silicon and research as further tessellation or compression advancements are.
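For reference, FMA just means a*b+c evaluated with a single rounding step, one instruction on hardware that supports it. A tiny hedged CUDA sketch of the difference (the kernel and array names are made up for illustration):

```cuda
// fused[i] uses a single-rounding fused multiply-add; split[i] is the
// conventional two-operation form (which nvcc may still contract into an FMA).
__global__ void fma_demo(const float* a, const float* b, const float* c,
                         float* fused, float* split, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    fused[i] = fmaf(a[i], b[i], c[i]);   // one op, one rounding
    split[i] = a[i] * b[i] + c[i];       // conceptually two roundings
}
```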