karlotta said: Apple has games?
that LCD is not for games...
Yeah that monitor looks quite tasty. Is it a re-badged Big Bertha? (You know, the IBM one from a few years back).
SA said: Scaling the geometry processing using multiple GPU cards shouldn't be too difficult if the cards use screen partitioning (as opposed to scan line interleaving). If the application's 3D engine uses efficient culling of geometry to the viewport (say using hierarchical bounding volumes), the scaling happens automatically, since geometry outside a card's partition is efficiently culled before being processed.
Chalnoth said: Isn't a bounding volume mathematically the same as calculating one coordinate in this case?
I don't get what you mean. Why would this be the same?
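A minimal sketch of the partition-culling idea SA describes, assuming a simple two-way split of the screen and bounding volumes already projected to screen-space rectangles (the struct and function names here are invented for illustration, not taken from any real driver or engine):

#include <vector>

// Hypothetical screen-space bounding rectangle of an object (in pixels).
struct ScreenRect {
    float xmin, ymin, xmax, ymax;
};

// The horizontal strip of the framebuffer one GPU is responsible for.
struct Partition {
    float xmin, xmax;
};

// An object whose bounding rect lies entirely outside the partition can be
// skipped by that card before any of its geometry is processed.
bool intersectsPartition(const ScreenRect& r, const Partition& p) {
    return r.xmax >= p.xmin && r.xmin <= p.xmax;
}

// Cull a batch of objects against one card's partition. With a hierarchical
// bounding volume tree you would test parent volumes first and reject whole
// subtrees at once, which is where the near-automatic scaling comes from.
std::vector<int> visibleObjects(const std::vector<ScreenRect>& bounds,
                                const Partition& p) {
    std::vector<int> visible;
    for (int i = 0; i < (int)bounds.size(); ++i)
        if (intersectsPartition(bounds[i], p))
            visible.push_back(i);
    return visible;
}

With two cards splitting the screen left/right, each runs the same test against its own Partition, so for a reasonably balanced scene each card rejects roughly half the geometry up front.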
AlphaWolf said:
Ailuros said: The Apple gigantic monitor has a native resolution (or did I get that the wrong way?) of 2560*1600. Anything lower than the native resolution on TFT/LCD monitors isn't usually a good idea...
Simon F said: An integer fraction should be ok, surely?
Some LCD displays scale better than others, but even that would be a problem for that Apple, as it's a bit of an odd resolution and many games seem to lack support for odd resolutions.
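To put a number on the "integer fraction" point: 2560*1600 drops cleanly to 1280*800 (exactly half in each direction, so every output pixel covers a 2x2 block of native pixels), whereas something like 1920*1200 needs a non-integer ~1.33x stretch and therefore filtering. A quick check, purely illustrative:

#include <cstdio>

// True if the native resolution is the same integer multiple of the requested
// one on both axes, i.e. the panel can scale without fractional filtering.
bool isIntegerFraction(int nativeW, int nativeH, int reqW, int reqH) {
    return reqW > 0 && reqH > 0 &&
           nativeW % reqW == 0 && nativeH % reqH == 0 &&
           nativeW / reqW == nativeH / reqH;
}

int main() {
    printf("%d\n", isIntegerFraction(2560, 1600, 1280, 800));   // 1: clean 2x scale
    printf("%d\n", isIntegerFraction(2560, 1600, 1920, 1200));  // 0: ~1.33x stretch
}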
Chalnoth said: This is why we need resolution-independent GUIs.
Xmas said: A problem might be analyzing the vertex shader code and stripping the transform calculation from it.
Mephisto said: I don't think "analyzing" vertex shader code is a realistic solution. Games do lots of skeletal animation stuff and sometimes even physics which affect vertex positions ... hard to detect.
You can still try, and fall back to sending both cards all geometry data if the shader is too complex. Cases where the position is simply the result of a vector-matrix mul are easy to detect and very common.
Mephisto said: The algorithm must be independent of the vertex shader code. I guess some redistribution of vertex data between the two cards must be done after vertex shading.
I highly doubt it. The post-transform cache is quite small, and the cards don't store transformed vertices in memory. So where should this vertex data go, and how would you compensate for the latency if one GPU is waiting for vertex data from the other GPU to continue rendering?
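A rough sketch of the kind of detection being suggested, assuming DX9-era vertex shader assembly where the output position register is oPos: if the only instruction writing oPos is a single m4x4 (or an equivalent dp4 sequence) of the input position against constant registers, the driver could strip that transform and do the screen-partition test itself; anything more exotic (skinning, physics) fails the test and falls back to broadcasting all geometry to both cards. The decoded-instruction struct and the detection logic below are assumptions for illustration, not anything a real driver is known to do:

#include <string>
#include <vector>

// Hypothetical decoded vertex shader instruction.
struct Instr {
    std::string op;    // e.g. "m4x4", "dp4", "mad"
    std::string dst;   // destination register, e.g. "oPos", "r0"
    std::string src0;  // first source, e.g. "v0" (input position)
    std::string src1;  // second source, e.g. "c0" (constant matrix)
};

// True only if oPos is written exactly once, by a plain vector-matrix multiply
// of the input position with a constant matrix - the easy, common case.
bool isSimpleMvpTransform(const std::vector<Instr>& code) {
    int writes = 0;
    bool simple = false;
    for (const Instr& in : code) {
        if (in.dst != "oPos") continue;
        ++writes;
        simple = in.op == "m4x4" && in.src0 == "v0" &&
                 !in.src1.empty() && in.src1[0] == 'c';
    }
    return writes == 1 && simple;
}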
Ailuros said: What games? Even if games supported resolutions beyond 2048*1536 (which isn't all that common either), performance would be lackluster anyway.
Maybe it's just me, but I somehow have the feeling that even though 32" translates into a huge viewing area, 2560*1600 is a tad over the top. I'd most likely opt for 1920*1440 instead, in order not to have to glue my nose to it to read simple text.
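For scale, that native resolution is a lot of pixels to fill: 2560*1600 is about 4.1 million pixels per frame, versus roughly 1.9 million at 1600*1200 and 3.1 million at 2048*1536, so a fill-rate-bound game would have to push more than twice the work of 1600*1200 just to hold the same frame rate.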
Xmas said: A problem might be analyzing the vertex shader code and stripping the transform calculation from it.
Mephisto said: I don't think "analyzing" vertex shader code is a realistic solution. Games do lots of skeletal animation stuff and sometimes even physics which affect vertex positions ... hard to detect. The algorithm must be independent of the vertex shader code. I guess some redistribution of vertex data between the two cards must be done after vertex shading.
I seriously doubt it. It'd take too much bandwidth.
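A back-of-the-envelope number for the bandwidth objection, with the vertex count, per-vertex size and frame rate picked purely as illustrative assumptions:

#include <cstdio>

int main() {
    // Illustrative assumptions, not measurements:
    const double verticesPerFrame = 1e6;  // a fairly heavy scene for the era
    const double bytesPerVertex   = 64;   // post-transform position plus a few interpolants
    const double framesPerSecond  = 60;

    double gbPerSecond = verticesPerFrame * bytesPerVertex * framesPerSecond / 1e9;
    printf("%.1f GB/s one way\n", gbPerSecond);  // ~3.8 GB/s

    // For comparison, AGP 8x peaks around 2.1 GB/s, so shipping transformed
    // vertices between cards every frame would saturate the bus on its own.
    return 0;
}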