First off, I apologize for the tardy reply. I've been busy and short on sleep for a few days, and I could hardly justify a post when there were more important things at hand.
DaveBaumann said:
That’s all well and good, but we’re not there yet – and I doubt Cell will be what Tim was talking about either.
So, if I make a comment based on my experience and on how I project analogous scenarios to play out in the future, it doesn't apply unless I specifically call out that particular situation? For example, say Mario Tricoci states, "I love brunettes; the future of hair color is dark," while a girl happens to be thinking about changing her color to dark brown. Does that guidance and experience from an expert in the field not apply just because Mario wasn't specifically talking about her?
What Tim said, while not in direct response to the PS3/Broadband Engine, still applies wherever it falls under the ideas and comments he (as a professional) makes.
Dave said:
I think I’ve been playing games with physics in them for a long while – how is this an issue?
I specifically stated physics as calculated in a shader, on the GPU. How is what you stated in any way relevant?
Dave said:
Why would you want physics at the shader level? You’ll be dealing with nearly per-fragment data going back and forth there, which will cause horrific volumes of data and calculation.
Well, you're keen at pointing out that I don't have first-hand experience (which, oddly enough, goes both ways), so let's ask someone who posts here and does.
Humus's demo of 11.20.03, in which the physics that creates the disturbances is computed fully in a shader, on the GPU.
Or, alternatively, you can find the thread in your 3D Technology & Hardware Forum in which this is debated (I can't find it exactly), where a flag and its physics were used as an example.
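To make concrete what "physics in a shader" looks like here: the disturbances in such demos come from an iterative heightfield update evaluated once per texel, every frame. Below is a minimal sketch of that classic ripple step written in CUDA – purely illustrative, since CUDA postdates this discussion and this is not Humus's actual code; every name in it (stepRipple, damping, and so on) is made up.

[code]
// Minimal sketch, NOT Humus's demo code: one step of the classic
// heightfield ripple update, with one thread per grid cell -- the
// same per-"fragment" granularity a pixel-shader version works at.
// All names here are illustrative.
__global__ void stepRipple(const float* prev,   // heights, two frames ago
                           const float* curr,   // heights, last frame
                           float* next,         // heights, this frame
                           int W, int H, float damping)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= W - 1 || y >= H - 1) return;

    int i = y * W + x;
    // The average of the four neighbours drives wave propagation...
    float sum = curr[i - 1] + curr[i + 1] + curr[i - W] + curr[i + W];
    // ...while subtracting the older frame supplies the velocity term.
    next[i] = (sum * 0.5f - prev[i]) * damping;
}
[/code]

Each frame the three buffers rotate and the kernel is relaunched; a pixel-shader equivalent would ping-pong between render targets in exactly the same way.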
Dave said:
Vince said:
Yet, the core reasons for a 3D IC still stand as there are huge speed-ups to be had by implementing the most basic of iterative functions in logic constructs.
Basic? Not necessarily, focused – yes.
Semantics. I'd propose that what should be implemented in hardware are the functions whose computational requirements are constant or linear. Anything with higher requirements, whose cost scales arbitrarily with the program – call it O(Shader) – shouldn't be bound to arbitrary constructs (e.g. a Fragment Shader as opposed to a Vertex Shader, or an isolated PPP).
What gets me is what you do again in your next response – this arbitrary division of “dedicated” logic as opposed to “programmable.” It just doesn't make sense to me; perhaps you could honestly explain it, because the last 'front-end' TCL I've seen that was truly dedicated was DX7. Everything since has been more and more grey – to the point where I need to ask (and am still waiting) for you to explain how a VU/APU differs from today's R300 VS. Because, according to you, one is dedicated and one is programmable, but when you get down and dirty, where's the huge difference in actual logic?
Dave said:
For the same reason you always have – they are faster at their designed task than general-purpose ones.
Why? More specifically, is this always true, or is it task dependent? Faster than what? Again, the constant and linear functions should be implemented in hardware, but beyond that – in the day of long and numerous shaders – who's to say that an architecture like Cell or Philips' CAKE is inferior to an ATI/nVidia Vertex Shader?
What I find most peculiar about your position is that, if it were true, we'd all still be utilizing the fixed functionality of DX7 and earlier because of how fast it was for the silicon invested. The movement towards shaders has already driven us away from the core ideology you hold (that fixed is faster than programmable). So, if you acknowledge (as you must) that programmability is more desirable than strict fixed functionality, why stop where DX9/Next is? Because ATI is?
Dave said:
Vince said:
Since you seem fond of asking for my 'expert' opinion on matters, perhaps I'll invoke your years of working at an IHV (like the B3D founders) and ask a simple question: why? What's so unique, as per the logic constructs, about an ATI part that's running a shader/microprogram as opposed to, say, VU1 or an APU?
Are these operating at the fragment level?
At this point, no – but within a year or two, yes. The question still stands; run with it.
Dave said:
Of course DX Next is an evolution of DX, however I’m asking where the deficiencies are – you’ve failed to point any out as yet. All you’ve said so far equates to “it’s an evolution, tsk, tsk, can’t be good” without explaining what pitfalls there are to the approach that it’s taking.
Ok, well for starters I don't believe they've consolidated computational resources as well as they could have. They effectively merged the shaders by unifying the syntax models, but why stop there? Then they added Topology Support and increased Tessellation, which compose your PPP. Why are these separate constructs? Do you think they will remain so?
It still appears to be a legacy PC ideology, where you have big pools of resources and little bandwidth between them – everything 3D on one IC (the GPU) and everything else on the other (the CPU). Ok, but is this the most optimal use of resources?
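To make the unification point concrete: once the syntax models are merged, nothing at the language level forces vertex and fragment work onto separate pools of logic. The sketch below is hypothetical CUDA (made-up names, not anything specified in DX Next) showing one routine serving both "stages"; keeping them on distinct fixed pools then becomes a scheduling decision rather than a necessity.

[code]
// Hypothetical sketch -- not DX Next or any IHV's design. One shared
// routine serves both a "vertex" pass and a "fragment" pass once the
// syntax model is unified. All names here are made up.
__device__ float attenuate(float dist, float k)
{
    // Simple distance-based light falloff, usable at any stage.
    return 1.0f / (1.0f + k * dist * dist);
}

// "Vertex-stage" use: falloff evaluated once per vertex.
__global__ void shadeVertices(const float* vertDist, float* vertLight,
                              int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) vertLight[i] = attenuate(vertDist[i], k);
}

// "Fragment-stage" use: the identical routine, once per pixel.
__global__ void shadeFragments(const float* fragDist, float* fragLight,
                               int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) fragLight[i] = attenuate(fragDist[i], k);
}
[/code]

Whether those two launches land on one pool of ALUs or on two fixed ones is invisible at this level – which is exactly why the separation reads as legacy partitioning rather than a technical requirement.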
Dave said:
And, please, tell us what’s so different between OGL2.0 and DX Next (there are areas where what’s available in DX already surpasses the OpenGL2.0 specification, which still isn’t here yet).
You have about 4-5 threads in the 3D forum talking about this to some extent, including one – which still hasn't died – about preferring the OGL model over the DX model.
Dave said:
Vince said:
Are you going to tell me that without Einstein and his Invariance/Relativity theories our concepts of the universe would have just naturally evolved? A similar progression applies to all areas of study – including those in the 3D realm.
And these analogies are worthless until we see what happens – you can quote some things that don’t really bear much relevance to the topic at hand, and as Dio points out you can also quote a whole bunch of failures. I could also pick an infinite number of things that are inordinately successful from progressive development and refinement, but ultimately it doesn’t really assist this discussion.
Everything is worthless until you observe it, if you want to play that game. And it does bear relevance insofar as it shows that ideologies which are progressions of previous thought are never revolutionary and never exceed the status quo by a large amount.
This is what epistemology deals with. What you're stating (i.e. your basic argument) lies in opposition to current thinking, which holds that knowledge expansion is an asymmetrical affair: mostly linear advances happen that create the bulk of our understanding, yet there must be a small disturbance from which the truly revolutionary progressions are made. That then raises the status quo for the linear thinking, and the cycle repeats.
This holds for all fields of human endeavor, 3D graphics included. What Dio is stating is a fundamental tenet of what I'm stating: you can't have only revolutionary ideas. Yet DX has basically extinguished any chance of a single new, radical ideology breaking out on the PC. Thank the Lord for consoles.
Dave said:
Vince said:
Granted, what we think changes with time, but back then you were supporting the 3dfx route (fillrate and bandwidth, vis-a-vis VSA) over the rudimentary and mostly useless TCL solutions (nVidia's NV10).
I think you’ll be hard pushed to find me say it was useless – I questioned the need for it then. And B3D as a whole was asking for programmable hardware.
Fine, I'll grant you that. Yet it's hypocritical to make comments against the design of the PS2 – which embodied the very ideology you were supporting at that time (vis-a-vis 3dfx) – given what was possible under the era's limitations.
Dave said:
<Shrug> From a hardware point of view you can clearly see that it took developers a much longer time to get up to speed with PS2, and this still appears to be the case given the performance thread here.
Perhaps. Most likely true. Yet this has little to do with the final results as seen by the consumer. The PS2 is still getting the games, still getting the content. In the end, it's economics – not hardware.