Vince said:
I'm willing to bet, 'At the moment' you were resolved enough to comment. Which is why I questioned it and you got defensive.
At that point I was unaware of the layout of what, apparently, is the Visualiser, and as this discussion has evolved it's clear there is a considerable lack of understanding from all parties on how the fragment processing is to occur.
The subsequent points that I’ve raised with Panajev, which he appears to agree with to some extent, concern the most likely “best case usage” scenario, in which it looks like much of the BE's resources will be devoted to geometry processing and the Visualiser mainly to fragment processing. In that case, under more general usage, I think the fragment processing abilities are still a question, particularly in relation to a DX Next implementation that may well be able to shift its resource usage from geometry to fragment processing as demand requires.
Vince said:
And I'm still a bit confused why you'd rather have a DX solution which has arbitrary restrictions on logic constructs 'forward' of sampling (which is basically where the next generation will be limited, kinda that open ended O(Shader) concept) than something that doesn't. Why wouldn't you want the entire computational resource 'pool' that's not linearly bounded in task to be unified?
Vince, I’m not necessarily saying I’d rather have anything; I’ve come from the opinion that there are always relative merits and pitfalls to any system.
The obvious point with the DX approach is that it is focused primarily on graphics processing, which can make it faster at that task than a more general purpose unit. Sure, there will be trade-offs in other areas, and ultimately differing architectures that appear in roughly the same timeframe will probably be fairly well matched given all their relative merits and weaknesses (and 90% of the software written will see to that anyway). I just don’t subscribe to the theory that DX Next has any fundamental legacy that will necessarily inhibit it across different applications (from what I hear so far, DX Next will span a lot more than just PCs and high end consoles).
And what do you mean by restrictions “forward of sampling”? In the DX Next pipeline, sampling could be one of the first things that you do!
Vince said:
Perhaps you, or others, can help. I took the number off of a somewhat recent (~<3months) ATI Presentation which contained a slide that compared the aggregate FLOPs from the Shader Constructs on the R3x00 line and compared it to the N3x.
I believe the only comparisons to NVIDIA that ATI have done are of actual runtime performance, not theoretical rates. IIRC, 6 float instructions can be achieved in each pixel pipeline and, I think, 2 per Vertex Shader.
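Just to illustrate how such a theoretical aggregate would be arrived at, here's a rough back-of-envelope sketch. The per-unit issue rates are the figures above (6 float instructions per pixel pipeline, 2 per vertex shader); the unit counts and clock are assumed R3x0-class values for illustration only, not numbers from any ATI slide:

```python
# Hypothetical back-of-envelope for a theoretical shader instruction aggregate.
# Unit counts and clock below are assumptions for an R3x0-class part.

pixel_pipes = 8            # assumed number of pixel pipelines
vertex_shaders = 4         # assumed number of vertex shader units
clock_hz = 325e6           # assumed core clock in Hz

ops_per_clock = pixel_pipes * 6 + vertex_shaders * 2   # 56 instructions per clock
aggregate = ops_per_clock * clock_hz                    # ~18.2 G instructions/s

print(f"{aggregate / 1e9:.1f} G float instructions per second (theoretical peak)")
```

Of course, whether one "instruction" counts as one FLOP or several (vec4 MADs and the like) is exactly the sort of ambiguity that makes these aggregate comparisons slippery, which is why runtime comparisons tend to be the only ones quoted.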
Vince said:
For example, "we" (as a board more or less) have basically accepted the Suzuoki patent as a constant which we can use as a basis for discussion. Similar to how I intended to use DXNext. What you're doing can also be done by the PS3 side as one can point to the Sukuoki Cell patent and say, "Hey! Preferred Embodiment! They're going to amend it and put another 8 APUs, some nVidia IP for the Shaders, and a small paramecium wheel for power in there". Obviously, doing so doesn't lend itself well to discussion.
No Vince, I’ve not done anything of the sort; you’re the one who has made the assumption about how the DX Next platform can or will be implemented. It has always been the case that we just don’t know the details of a DX Next implementation within a console environment. All we can say about MS’s implementation is that we believe they are using the R500 platform as a graphics technology basis, and that the DX Next presentations are reasonable grounds to assume these are the graphics directions likely to be taken. Your post decrying the “legacy issues” it brings forth is fundamentally misplaced in the context of a console discussion, as you now appear to admit, because we still do not know implementation specific details.
While you are taking the Suzuoki patent as a constant for what you believe PS3 to be, you cannot do the same for a console based DX implementation, because we have no such specific details as yet. Hence, preconceptions about apparent “legacy” issues that are implementation specific, rather than related to the structure or direction of the API, are misplaced right now. As has been pointed out a few times already, MS has surprised with the choices they have made so far and may well continue to do so, so even extrapolating from alternate or previous platforms may not necessarily point to Xbox2.