Simon F said:
_xxx_ said:
hovz said:
no game in the near future will make any real use of vertex shaders.
You see, they're going to be quantum theory based. All games in the near future will send all objects through the pipeline rotated and translated to all possible positions. The pixel shader will then make all rendered pixels belonging to an incorrect position fully transparent. This eliminates the need for vertex shaders.
???
QED.
hovz said:
no.
I'm still dumbfounded that a programmer just said that... :?
Blastman said:I don't think the NV40 has better per-clock performance than the R420. Look at the 3DMark05 scores (TechReport):

6800GT … 4669
X800 Pro … 4631

Fill rate (pipelines × core clock, Mpixels/s):

X800 Pro … 12 × 475 = 5700
6800GT … 16 × 350 = 5600

They are very close: the 3DMark05 scores (a good raw-performance metric) for the GT and the Pro are almost identical. The GT has faster memory (1000 MHz vs. 900 MHz for the Pro), which probably gives it a bigger advantage than the Pro's slight fill-rate edge -- but per-pipeline-per-clock performance is close between the NV40 and the R420. Note that the NV40 runs DST shadows, so it is actually running a different code path than the R420. The 6800GT's 3DMark05 score drops roughly 10% when it doesn't use DST, so the X800 Pro would easily outrun the 6800GT in 3DMark05 on the same workload. This suggests the X800 has faster per-pipeline-per-clock performance than the 6800 in DX9; the 6800 looks faster in OpenGL (based on game benchmarks). Of course, this assumes the drivers/compilers are about equally optimized -- but that is another discussion altogether.
The problem with the X800 XT PE's higher clock rate is that if you keep memory bandwidth the same and only clock the GPU faster, the GPU becomes less efficient per clock, because it sees less bandwidth per clock.
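(Blastman's arithmetic is easy to check. Below is a minimal Python sketch using only the figures quoted above, plus the 256-bit memory bus both cards ship with; the non-DST number is his ~10% estimate, not a measurement.)

```python
# Working through the figures quoted above: per-pipe-per-clock scores and
# memory bandwidth per core clock for the 6800GT and X800 Pro.

cards = {
    # name: (pixel pipelines, core MHz, effective memory MHz, 3DMark05 score)
    "6800GT":   (16, 350, 1000, 4669),
    "X800 Pro": (12, 475,  900, 4631),
}
BUS_BYTES = 256 // 8  # 256-bit memory bus = 32 bytes per memory clock

for name, (pipes, core, mem, score) in cards.items():
    fillrate = pipes * core                     # theoretical Mpixels/s
    per_pipe_per_mhz = score / (pipes * core)   # score per pipeline per MHz
    bandwidth_gb = BUS_BYTES * mem / 1000       # GB/s
    bytes_per_core_clock = BUS_BYTES * mem / core
    print(f"{name}: {fillrate} Mpix/s, {per_pipe_per_mhz:.3f}/pipe/MHz, "
          f"{bandwidth_gb:.1f} GB/s, {bytes_per_core_clock:.0f} B/core clock")

# Discount the estimated ~10% DST advantage from the 6800GT's score:
print(f"6800GT (no DST): {4669 * 0.9 / (16 * 350):.3f}/pipe/MHz")
```

Run it and the Pro comes out ahead per pipe per clock once the DST path is discounted, while the lower-clocked GT sees roughly half again as many bytes of bandwidth per core clock -- the efficiency effect Blastman describes.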
Inane_Dork said:Isn't that load removed with a deferred renderer?
Don't hit me.
DeanoC said:
Nope, deferred rendering just defers pixel shading until later. It still requires at least the same amount of vertex work, i.e. you need to store positions and normals in a G-buffer, so you still have to transform/skin etc. them into that buffer.
Inane_Dork said:
I was... kidding. Sorry.
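(A toy Python sketch of DeanoC's point, with hypothetical stub functions rather than any real engine's API: both pipelines invoke the vertex shader once per vertex; the deferred path merely writes the results to a G-buffer and shades later.)

```python
# Toy model: deferred shading defers *pixel* work, not vertex work.
# run_vertex_shader() stands in for skinning + transform; details omitted.

def run_vertex_shader(vertex):
    return vertex  # placeholder for transform/skin, position + normal out

def forward_render(meshes):
    vs_invocations = 0
    for mesh in meshes:
        for v in mesh:
            run_vertex_shader(v)   # vertex shader runs once per vertex,
            vs_invocations += 1    # pixels are lit immediately afterwards
    return vs_invocations

def deferred_render(meshes):
    vs_invocations = 0
    g_buffer = []
    for mesh in meshes:
        for v in mesh:
            g_buffer.append(run_vertex_shader(v))  # same per-vertex work, but
            vs_invocations += 1                    # results go to the G-buffer
    # lighting happens later by reading g_buffer; no further vertex work
    return vs_invocations

meshes = [[("position", "normal")] * 1000 for _ in range(10)]
assert forward_render(meshes) == deferred_render(meshes)  # same vertex load
```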
Neeyik said:
hovz said:
no game in the near future will make any real use of vertex shaders...
...which games in the near future will be vertex heavy to a degree even close to that of 3dmark05?
Are you arguing about the amount of vertices or the use of shaders? Besides, 3DMark05 is vertex intensive simply because of the sheer number of vertices; technically one could argue that it's also vertex shader intensive, because all vertex processing is done via shaders. Anyway, don't you read our reviews or keep up with discussions? Look at Wavey's X800 preview - particularly the Splinter Cell testing - and take a look at this thread.
DemoCoder said:Doesn't 3DMark do shadow volume extrusion via vertex shaders? That is certainly something that won't be done via VS in the future. I believe the future lies in programmable tessellation units, in which case vertex shading is a subset. Some architectures will just make the tessellator unit general-purpose enough to do both; others may keep the tessellate/vertex-shade dichotomy. For example, on CELL-style architectures the CPU may just do everything, and Nvidia may simply leave transformation out of the GPU. Or it could keep a pared-down transform unit, but it wouldn't need a complex VS 3.0 one.
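(For reference, the vertex-shader extrusion trick DemoCoder mentions works roughly like this. A minimal Python sketch of the common directional-light variant, not 3DMark05's actual shader; real implementations extrude degenerate quads along silhouette edges, often by projecting to w = 0.)

```python
# Sketch of shadow-volume extrusion as done in a vertex shader: vertices whose
# normals face away from the light get pushed far along the light direction,
# stretching the mesh into a shadow volume. Illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def extrude(position, normal, light_dir, distance=1e4):
    """light_dir is the direction the light travels (from light into scene)."""
    if dot(normal, light_dir) > 0.0:   # vertex faces away from the light
        return tuple(p + d * distance for p, d in zip(position, light_dir))
    return position                    # vertex faces the light: left in place

# A light shining straight down: the downward-facing vertex is extruded.
print(extrude((0.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.0, -1.0, 0.0)))
# -> (0.0, -10000.0, 0.0)
```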