AJ said: Yes.
Cheers,
AJ
This was the most informative post in the whole thread. Or at least it's in the top 3. You just need to know how to read between the lines (and there are 5 of them).
And no, I am not joking.
Ailuros said: If the TBDR advantages in small embedded/UMA architectures can't be surpassed, then theoretically at least other IHVs could easily consider using TBDRs of their own. As long as it wouldn't infringe IMG's patents, I don't see a problem there.
Nappe1 said: Based on paper specs, no one has anything that can compete with SGX right now. G40 is the closest thing, but it's slower and less flexible. (Though both have full OpenGL ES 2.0 support, SGX goes beyond that.)
ddes said: I believe it's not just IMG's patents but also Gigapixel's patents, which are now owned by NVIDIA. I think IMG is also affected by these patents.
TEXAN said: TBR and TBDR are two different things.
Nobody has access to TBDR apart from IMG.
The Giga3D architecture processes the geometry in a different manner. The front of the rendering pipeline is very much like a classical architecture. Following that is a binning and visibility-culling system, and then a tile-based rasterization system. In the case of the Giga3D architecture, primitives are accepted immediately, without waiting for rasterization. The Geometry/Binning/Visibility and Rasterization (or tiling) systems operate in parallel, but normally on two separate frames. While binning frame N, frame N-1 is being rasterized. This means that before rasterization begins, all required textures have been seen and work can be done to make them available as soon as they are needed.
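To make the two-frame overlap concrete, here is a minimal C++ sketch of that pipelining. BinFrame and RasterizeBinnedFrame are hypothetical names standing in for the two stages; the real hardware runs them as parallel units, not as software threads.

#include <functional>
#include <thread>
#include <vector>

struct Frame {
    // Transformed primitives, per-tile bins, list of referenced textures.
};

// Geometry + binning: transform, clip, light, assign primitives to tiles.
void BinFrame(Frame& /*f*/) { /* ... */ }

// Rasterization: shade each tile from its bin, flush to the frame-buffer.
void RasterizeBinnedFrame(Frame& /*f*/) { /* ... */ }

void RenderLoop(std::vector<Frame>& frames) {
    if (frames.empty()) return;
    for (size_t n = 1; n < frames.size(); ++n) {
        // While frame N is being binned, frame N-1 is rasterized.
        std::thread binner(BinFrame, std::ref(frames[n]));
        RasterizeBinnedFrame(frames[n - 1]);
        binner.join();
    }
    RasterizeBinnedFrame(frames.back()); // drain the last binned frame
}

Because binning for frame N finishes a whole frame before its rasterization starts, the texture list for frame N is known early, which is exactly the property the texture-management discussion below relies on.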
• Geometry processing
– Vertices are transformed to viewing space
– The primitive is clipped to fit the screen
– Vertex-lighting is computed and applied
• Binning and visibility
– Each primitive and its required information is associated with all of the screen tiles it touches
– All primitives and pixels which are covered by opaque surfaces are discarded
• Rasterization: performed per tile for all primitives that affect the tile (see the sketch after this list)
– Iterates colors
– Iterates texture coordinates
– Performs texture look-up and filtering operations
– Reads the on-chip Z-buffer to get the current depth value
– Compares the Z-buffer depth to the primitive depth; if the primitive is closer, writes color and Z to the on-chip tile-buffer
– After all primitives have been rendered, writes Color and, optionally, Z data to the external frame-buffer
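Here is a rough C++ sketch of that per-tile loop. The tile size, the Prim and Fragment types, and the Rasterize() stand-in (which lumps together the color/texcoord iteration and texturing steps) are all assumptions for illustration, not actual Giga3D interfaces.

#include <cstdint>
#include <limits>
#include <vector>

constexpr int TILE_W = 32, TILE_H = 32; // assumed tile size

struct Prim { /* iterated colors, texture coordinates, depth plane */ };

struct Fragment { bool covers; float depth; uint32_t color; };

// Stand-in for color/texcoord iteration plus texture look-up and filtering.
Fragment Rasterize(const Prim&, int /*x*/, int /*y*/) {
    return {true, 0.5f, 0xFFFFFFFFu};
}

void RenderTile(const std::vector<Prim>& binned,
                uint32_t* extColor, float* extZ, bool appReadsZ) {
    // On-chip tile buffers: Z is never read from external memory.
    float    tileZ[TILE_H][TILE_W];
    uint32_t tileColor[TILE_H][TILE_W] = {};
    for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x)
            tileZ[y][x] = std::numeric_limits<float>::max();

    for (const Prim& p : binned)
        for (int y = 0; y < TILE_H; ++y)
            for (int x = 0; x < TILE_W; ++x) {
                Fragment f = Rasterize(p, x, y);
                if (f.covers && f.depth < tileZ[y][x]) { // depth compare
                    tileZ[y][x] = f.depth;               // write Z on-chip
                    tileColor[y][x] = f.color;           // write color on-chip
                }
            }

    // After all primitives: one external write of color, and of Z only
    // when the application will read it back.
    for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x) {
            extColor[y * TILE_W + x] = tileColor[y][x];
            if (appReadsZ) extZ[y * TILE_W + x] = tileZ[y][x];
        }
}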
In this case, the multi-sample calculations are done in parallel. This is possible because they access only a small but very wide local tile buffer. Normal operation never reads Z values from external memory and only writes Color values for the completed tile. Since there is a frame of latency between binning and rasterization, the system can automatically tell when the application will do a Z read-back, and can write the final Z values to the frame-buffer in that case.
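As an illustration of the multi-sample point: because all samples of a pixel sit side by side in the wide on-chip tile buffer, the resolve is a purely local operation. A small sketch follows; the sample count and the 8-bit RGB packing are assumptions, and the per-channel averaging is emulated serially here where hardware would do it in parallel.

#include <cstdint>

constexpr int kSamples = 4; // assumed multisample count

// Average the samples of one pixel; hardware can do this in parallel
// because the wide tile buffer delivers all samples in one access.
uint32_t ResolvePixel(const uint32_t sample[kSamples]) {
    uint32_t r = 0, g = 0, b = 0;
    for (int s = 0; s < kSamples; ++s) {
        r += (sample[s] >> 16) & 0xFF;
        g += (sample[s] >> 8) & 0xFF;
        b += sample[s] & 0xFF;
    }
    return ((r / kSamples) << 16) | ((g / kSamples) << 8) | (b / kSamples);
}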
The fact that there is a frame of latency between the binning and rasterization steps provides the opportunity to optimize performance for each scene. The primitives can be accepted very fast, since they don't have to be rasterized immediately. Performance is more consistent, much like systems that use triple-buffered rendering. As mentioned above, Z values are only stored off-chip when read-back is requested. Texture management is greatly improved as well. The device driver knows all the textures that will be used before the scene is rasterized. It is then able to make optimal decisions about which textures should be resident and which should come from host memory.
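A sketch of that residency decision, with Texture, UploadToLocalMemory, and EvictFromLocalMemory as hypothetical driver-side names: the point is only that the binning pass has already produced the full set of texture IDs the frame will touch before any tile is shaded.

#include <set>
#include <vector>

struct Texture { int id; bool resident; };

void UploadToLocalMemory(Texture& t) { t.resident = true;  /* DMA upload */ }
void EvictFromLocalMemory(Texture& t) { t.resident = false; /* free space */ }

// Called between binning and rasterization: usedIds is known a frame ahead.
void PrepareTextures(std::vector<Texture>& pool, const std::set<int>& usedIds) {
    for (Texture& t : pool) {
        bool needed = usedIds.count(t.id) != 0;
        if (needed && !t.resident)
            UploadToLocalMemory(t);   // resident before any tile needs it
        else if (!needed && t.resident)
            EvictFromLocalMemory(t);  // keep only the frame's working set
    }
}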
Simon F said:I don't think it was deferred.
Ailuros said: It wasn't immediate in the strict sense either, with a binning engine, the way I understand it.
Speaking of which, isn't the proposed OGL_ES2.0 texture compression scheme from Ericsson?
ddes said: I believe in the mobile space you have to compare the performance/size/power ratios rather than pure performance, which on its own is quite insignificant.
Nappe1 said: PACKMAN compression perhaps?
That's the one that comes to my mind...
It's an updated version of the one in their presentation at SIGGRAPH 2003.
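For readers unfamiliar with PACKMAN: the core idea, per Ericsson's SIGGRAPH 2003 presentation, is a per-block base color plus a per-pixel luminance shift. The decode sketch below illustrates only that idea; the modifier values and bit layout are made up for illustration, not the actual PACKMAN codebook.

#include <algorithm>
#include <cstdint>

static const int kModifiers[4] = { -24, -8, 8, 24 }; // illustrative table

struct Rgb { uint8_t r, g, b; };

// Decode one pixel: extend the 4-bit-per-channel base color to 8 bits,
// then shift all three channels by the same signed luminance modifier.
Rgb DecodePixel(uint16_t baseRgb444, int pixelIndex2bit) {
    int r = ((baseRgb444 >> 8) & 0xF) * 17; // 0..15 -> 0..255
    int g = ((baseRgb444 >> 4) & 0xF) * 17;
    int b = ( baseRgb444       & 0xF) * 17;
    int m = kModifiers[pixelIndex2bit & 3];
    return { (uint8_t)std::clamp(r + m, 0, 255),
             (uint8_t)std::clamp(g + m, 0, 255),
             (uint8_t)std::clamp(b + m, 0, 255) };
}

Shifting all three channels by a single modifier is what keeps the per-pixel cost to 2 bits: the block stores one color, and each pixel only picks how much lighter or darker it is.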
Nappe1 said: Yeah, but from this generation (G40, SGX, Mali200), do any of the companies give any public information about power consumption numbers? We could (as always) speculate that SGX should not be as power-hungry as MBX was, but there's no raw fact on that.
Looks like I'll have to do more work on the PVR-TC compressor.
3N1gM4 said: