Panajev2001a:
Then you had to pull Volume Modifiers and Hardware Translucency sorting from the next PVR cards, so no more awards for you.
(beating a dead horse take 20)
I'm curious about the design.
Panajev2001a said:
Cough... Graphics Synthesizer... cough... Hey, I am a fan of fat 2,560-bit data paths.
I'm not sure if I'm right, but doesn't every pair of pipes share a TMU? That is why, when texturing and filling polygons, its maximum fillrate is cut in half.
Simon F said:
I'm curious about the design.
Panajev2001a said:
Cough... Graphics Synthesizer... cough... Hey, I am a fan of fat 2,560-bit data paths.
I know it has 16 texture units, but how many clocks does it take for each read/modify/write process?
If you have two small, sequentially adjacent translucent polygons that overlap each other, do you get full performance?
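To put rough numbers on the halved-fillrate point, here is a back-of-the-envelope sketch. The ~147.456 MHz clock and 16 pixel engines are the commonly quoted GS figures, and the halving with texturing enabled is the behaviour described above rather than anything measured here; treat the output as illustrative only.

```cpp
// Back-of-the-envelope GS fillrate figures (illustrative only).
#include <cstdio>

int main() {
    const double clockHz    = 147.456e6; // commonly quoted GS core clock
    const int    pixelPipes = 16;        // pixels written per clock when untextured

    const double flatFill     = clockHz * pixelPipes; // pixels/s with no texturing
    const double texturedFill = flatFill / 2.0;       // halved when texturing, per the post above

    std::printf("flat fill     : %.2f Gpixels/s\n", flatFill / 1e9);
    std::printf("textured fill : %.2f Gpixels/s\n", texturedFill / 1e9);
    return 0;
}
```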
Dunno about that, but I did several tests with alpha blending enabled/disabled on complex scenes (200,000+ triangles per frame) and couldn't measure any relevant performance difference.
Simon F said:
If you have two small, sequentially adjacent translucent polygons that overlap each other, do you get full performance?
Simon F said:
As explained previously....
Panajev2001a said:
Then you had to pull Volume Modifiers and Hardware Translucency sorting from the next PVR cards, so no more awards for you.
1) There are no Volume Modifiers in DX or OGL - just stencils. MS would not add the more efficient VMs because it was damn-near impossible to do on IMRs.
2) Yes, that's a shame, but PC developers were too [insert as applicable] to disable their translucency sorting code in the games, so there was no point in leaving it in the hardware.
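For readers wondering what the "translucency sorting code" referred to above looks like on an immediate mode renderer, here is a minimal CPU-side sketch: sort translucent polygons back to front by view depth before submission, so alpha blending composites them in the right order. The struct and field names are illustrative, not any particular engine's API.

```cpp
// CPU-side back-to-front sorting of translucent polygons (illustrative names).
#include <algorithm>
#include <cstdio>
#include <vector>

struct TranslucentPoly {
    int   id;
    float viewDepth; // distance from the camera; larger means farther away
};

int main() {
    std::vector<TranslucentPoly> polys = {
        {0, 3.2f}, {1, 9.7f}, {2, 1.4f}, {3, 6.1f}
    };

    // Farthest first, so each alpha blend reads pixels that are already composited behind it.
    std::sort(polys.begin(), polys.end(),
              [](const TranslucentPoly& a, const TranslucentPoly& b) {
                  return a.viewDepth > b.viewDepth;
              });

    for (const auto& p : polys)
        std::printf("submit poly %d (depth %.1f)\n", p.id, p.viewDepth);
    return 0;
}
```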
Unfortunately it can't.
MfA said:
Can the rasterizer in the Graphics Synthesizer even work on multiple primitives at a time?
MfA said:
They have a patent on that (although personally I don't find it non-obvious... but hey).
[0018] A preferred embodiment of the present invention provides a method and apparatus, which are able to implement volumetric effects, such as forming clouds, efficiently. To do this it provides a set of depth buffer operations which allow depth values to be manipulated arithmetically. These operations allow a depth or blending value to be formed that can be representative of the distance between the front and back of the volume. After derivation, these values can be passed to a texture blending unit in which they can be used to blend other components such as iterated colours, textures, or any other source applicable to texture blending. The result from the texture blending unit can then be alpha blended with the current contents of the frame buffer.
[0059] A further embodiment allows the volumes to be processed as monolithic objects. As the volume would be presented as a "whole" it is possible to handle multiple per pixel entries and exits to/from the volume, as such concave volumes can be handled. Also, as the object would be represented as a volume, no destructive write to the depth buffer is required.
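A rough reading of the excerpt above, as a per-pixel sketch: take the difference between the back and front depths of the volume, turn it into a blend value, and alpha blend a volume colour with the current frame buffer contents. The exponential falloff and every name below are my own illustration of the idea, not the formulation claimed in the patent.

```cpp
// Per-pixel "volume thickness -> blend factor -> alpha blend" sketch (illustrative).
#include <algorithm>
#include <cmath>
#include <cstdio>

// Blend a volume colour over the existing frame-buffer value based on how much
// of the volume the view ray passes through at this pixel.
float shadePixel(float frontDepth, float backDepth, float density,
                 float volumeColour, float frameBufferColour) {
    const float thickness = std::max(0.0f, backDepth - frontDepth);  // depth arithmetic
    const float alpha     = 1.0f - std::exp(-density * thickness);   // thicker -> more opaque
    return alpha * volumeColour + (1.0f - alpha) * frameBufferColour; // alpha blend
}

int main() {
    // Thin and thick slices through the same "cloud", greyscale for simplicity.
    std::printf("thin slice : %.3f\n", shadePixel(10.0f, 10.5f, 1.5f, 1.0f, 0.2f));
    std::printf("thick slice: %.3f\n", shadePixel(10.0f, 14.0f, 1.5f, 1.0f, 0.2f));
    return 0;
}
```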
JohnH said:
Ailuros said:
I understood your point perfectly well; my question was going elsewhere, namely in the direction of the grid pattern.
I should probably ask first whether, past a certain number of samples (let's say 16x), a sparse grid really makes any significant difference anymore, yet I can theoretically see a very high edge equivalent resolution even on just an 8x sparsely sampled MSAA pattern, for example (8*8).
Clearly the more samples the better, yet I don't think the resulting grid is irrelevant after all.
It was just an example. However, the difference can be quite subtle, but when you see it back to back you'd always go for the higher sample rate. You also need to stop thinking purely in terms of triangle edges...
John.
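To illustrate the sparse-versus-ordered point with the same sample count, the sketch below counts how many distinct horizontal and vertical sample offsets each pattern provides, which is roughly what the "edge equivalent resolution" figure refers to for nearly horizontal or vertical edges. Both 8-sample patterns are made up for illustration and are not any particular vendor's layout.

```cpp
// Distinct sample offsets of an ordered versus a sparse 8x pattern.
#include <array>
#include <cstdio>
#include <set>
#include <utility>

using Pattern = std::array<std::pair<int, int>, 8>;  // (x, y) positions on an 8x8 subpixel grid

void report(const char* name, const Pattern& p) {
    std::set<int> xs, ys;
    for (const auto& s : p) { xs.insert(s.first); ys.insert(s.second); }
    std::printf("%-11s distinct x offsets: %zu, distinct y offsets: %zu\n",
                name, xs.size(), ys.size());
}

int main() {
    // Ordered 4x2 grid: 8 samples but only 4 unique columns and 2 unique rows.
    const Pattern ordered = {{{0, 2}, {2, 2}, {4, 2}, {6, 2}, {0, 6}, {2, 6}, {4, 6}, {6, 6}}};
    // Sparse pattern: every sample on its own row and its own column (8 of each).
    const Pattern sparse  = {{{0, 3}, {1, 7}, {2, 1}, {3, 5}, {4, 0}, {5, 4}, {6, 2}, {7, 6}}};

    report("ordered 8x:", ordered);
    report("sparse 8x:", sparse);
    return 0;
}
```

With the sparse layout a nearly vertical edge can land on 8 different sub-pixel columns instead of 4, which is what the 8*8 edge equivalent resolution figure above refers to.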
AFAIK that's what R300 does. Screen space is subdivided into tiles, with each modulo-N tile assigned to a quad engine, where N is the number of quad engines.
MfA said:
I assume that immediate mode renderers which are able to do it will use screen-space interleaving a la Bitboys to avoid dependency issues (or, to give it its formal name, sort middle).
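A minimal sketch of the modulo-N interleaving described above: carve the screen into fixed-size tiles and hand tile k to quad engine k % N, so each engine owns a disjoint set of pixels and engines never contend for the same frame-buffer locations. The tile size, engine count, and screen width are placeholders, not R300's actual values.

```cpp
// Modulo-N assignment of screen tiles to quad engines (placeholder sizes).
#include <cstdio>

// Which engine owns the pixel at (x, y)?
int ownerEngine(int x, int y, int tileSize, int tilesPerRow, int numEngines) {
    const int tileIndex = (y / tileSize) * tilesPerRow + (x / tileSize);
    return tileIndex % numEngines;
}

int main() {
    const int tileSize = 16, numEngines = 4, screenWidth = 128;
    const int tilesPerRow = screenWidth / tileSize;

    // Print the owning engine for each tile of the first two tile rows.
    for (int y = 0; y < 2 * tileSize; y += tileSize) {
        for (int x = 0; x < screenWidth; x += tileSize)
            std::printf("%d ", ownerEngine(x, y, tileSize, tilesPerRow, numEngines));
        std::printf("\n");
    }
    return 0;
}
```

Adjacent tiles land on different engines, so the pixel work spreads evenly while each frame-buffer location still has exactly one owner.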
My my! That is impressive. Zero clocks equals infinite fill rate. With that, why did they bother putting in more than one texture unit....?
Panajev2001a said:
Simon F said:
How many clocks does it take for each read/modify/write process?
If you have two small, sequentially adjacent translucent polygons that overlap each other, do you get full performance?
Ahem... how do I break it to him... it has... well... like... 0.
This would eliminate the problems with data hazards that I was trying to describe in the edited text. Unfortunately, it does it by deliberately slowing down the system.
nAo said:
Unfortunately it can't.
MfA said:
Can the rasterizer in the Graphics Synthesizer even work on multiple primitives at a time?
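A toy model of the hazard being discussed: assume the blend unit takes some number of clocks between reading a frame-buffer pixel and writing the blended result back. If a second translucent polygon touches the same pixel before the first write has retired, the unit must either stall or read a stale value; finishing one primitive before starting the next avoids the wrong result at the cost of those stalls on every such overlap. The latency and pixel addresses below are arbitrary.

```cpp
// Toy simulation of back-to-back overlapping translucent pixels hitting a
// blend unit with a multi-clock read/modify/write latency.
#include <cstdio>
#include <queue>
#include <set>
#include <utility>

int main() {
    const int RMW_LATENCY = 4;  // clocks from frame-buffer read to write-back (arbitrary)

    // Per-pixel work from two small overlapping translucent polygons:
    const int work[] = {10, 11, 12,   // polygon A
                        11, 12, 13};  // polygon B overlaps A on pixels 11 and 12

    std::queue<std::pair<int, int>> inFlight;  // (pixel, clock at which its write retires)
    std::set<int> pending;                     // pixels whose blended write is still in flight
    int clock = 0, stalls = 0;

    auto retire = [&](int now) {
        while (!inFlight.empty() && inFlight.front().second <= now) {
            pending.erase(inFlight.front().first);
            inFlight.pop();
        }
    };

    for (int pixel : work) {
        retire(clock);
        while (pending.count(pixel)) {  // hazard: the earlier write has not landed yet
            ++clock;
            ++stalls;
            retire(clock);
        }
        inFlight.push({pixel, clock + RMW_LATENCY});
        pending.insert(pixel);
        ++clock;  // one pixel issued per clock when not stalled
    }
    std::printf("pixels issued: 6, total clocks: %d, stall clocks: %d\n", clock, stalls);
    return 0;
}
```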
And that's nothing! What about a triangle rasterizer that is completely unaware of memory pages?
Simon F said:
This would eliminate the problems with data hazards that I was trying to describe in the edited text. Unfortunately, it does it by deliberately slowing down the system.
Presumably that would then explain the recommendation of using smallish triangles.
nAo said:
And that's nothing! What about a triangle rasterizer that is completely unaware of memory pages?
Simon F said:
This would eliminate the problems with data hazards that I was trying to describe in the edited text. Unfortunately, it does it by deliberately slowing down the system.
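A toy illustration of why smallish triangles help a rasterizer that knows nothing about memory pages: walking the spans of one large triangle keeps switching pages, while small triangles tend to stay inside a single page. The 64x32-pixel page footprint and the perfectly page-aligned small triangles are assumptions chosen for illustration, not a statement about the GS's documented memory layout.

```cpp
// Counting page switches for one large triangle versus many small ones.
#include <cstdio>

const int PAGE_W = 64, PAGE_H = 32;  // assumed pixel footprint of one memory page

// Which page does pixel (x, y) live in, for a screen PAGE_W * pagesPerRow pixels wide?
int pageOf(int x, int y, int pagesPerRow = 16) {
    return (y / PAGE_H) * pagesPerRow + (x / PAGE_W);
}

// Walk the spans of a right triangle anchored at (x0, y0) with the given legs,
// counting how many times the current page changes along the way.
int pageSwitches(int x0, int y0, int width, int height) {
    int switches = 0, current = -1;
    for (int r = 0; r < height; ++r) {
        const int spanLen = width - (width * r) / height;  // span shrinks towards the apex
        for (int x = x0; x < x0 + spanLen; ++x) {
            const int p = pageOf(x, y0 + r);
            if (p != current) { current = p; ++switches; }
        }
    }
    return switches;
}

int main() {
    // One 256x128 triangle versus sixteen page-aligned 64x32 triangles of equal total area.
    const int big = pageSwitches(0, 0, 256, 128);
    int small = 0;
    for (int ty = 0; ty < 4; ++ty)
        for (int tx = 0; tx < 4; ++tx)
            small += pageSwitches(tx * PAGE_W, ty * PAGE_H, PAGE_W, PAGE_H);

    std::printf("page switches, one big triangle  : %d\n", big);
    std::printf("page switches, 16 small triangles: %d\n", small);
    return 0;
}
```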