Next gen consoles: hints?

DaveB said:
Again - how are discrete blocks actually an issue (if this is what's to happen)? As I explained, DirectX can already effectively limit the required processing to those blocks, and this is only extended further with DX10.
I think "issue" is too strong a wording as it implies one approach is inherently bad in some way and I don't believe that.
And to be a little presumptuous and answer this instead of Vince - I believe he is using the same reasoning which drove the move from hardwired to programmable... and is also driving different architecting approaches in other, more or less loosely related areas - e.g. reconfigurable logic or intentional programming, to name two.

I for one like the idea of added flexibility when possible, but I also completely agree that it's a valid question just how much of a gain (if any) that really makes against more "conventional" stuff (if we can call it that).

LondonBoy said:
I'm sorry to say, I will boycott the next gen of consoles if it's not EASY to get proper hi-res output (here in Europe, where it still is a big problem for some gamers)
Unless the hardware makers enforce some kind of policy for hi-res games, or the consoles are so powerful that higher resolution is effectively free, it's not going to happen.

Btw Deano, you're working on XBox nowadays? :)
 
Yes, Faf and Deano, I know it's a TV problem and not a console problem, but I think "putting the option in there" would not hurt anyone now, would it...

I mean, DC's VGA output didn't hurt anyone, did it? The games defaulted to 480i for everyone; then if a consumer wanted to switch to VGA, he could. That was almost 5 years ago.

How is "putting the option in there" excluding part of the customers, if the games are internally rendered at hi-res anyway (IF that will be the case, of course)? I don't know, it just sounds stupid to me...
 
london-boy said:
I mean, DC's VGA output didn't hurt anyone, did it? The games defaulted to 480i for everyone; then if a consumer wanted to switch to VGA, he could. That was almost 5 years ago.

Because if you're using resources on something that won't affect much of your userbase, then you're denying resources that COULD be used to improve the experience for the vast majority of your users. If one is designing a console around it, it MIGHT pay off in the long run, provided the majority of people come to utilize the feature before your next console (banking on future utilization to keep people interested). But you might also simply be denying resources or increasing hardware costs to do things no one except true enthusiasts will see, thereby lowering your ability to compete with your competition a bit - either technologically or financially or more.

Ultimately THAT won't make a helluva lot of difference either, as it is just one small factor in the overall success of a console, but it's always going to be one of the calls that console makers and game developers have to make when releasing their products, and it's certainly a long-range guessing game.
 
MrSingh said:
Ty said:
Hmm, might not be one CPU. ;)

hey, I've heard of that rumour too. :p


Is the rumour about how many separate CPU dies there are, or the number of CPU cores per die? Dual cores should be starting to become more common by the end of 2005. IBM might even be able to pull off a very cut-down quad core CPU on a 65nm process cheaply enough for Microsoft.

I thought the Xbox 2 design was floating, depending on the year of launch?
 
Sonic said:
The consoles are definitely holding you back there. That's what happens when your largest demographic is on consoles. Still off topic though.

Yup. And the same is the case across the consoles as well.

Fafalada said:
I think "issue" is too strong a wording as it implies one approach is inherently bad in some way and I don't believe that.

No, neither do I - they are just different.

Fafalada said:
And to be a little presumptuous and answer this instead of Vince - I believe he is using the same reasoning which drove the move from hardwired to programmable... and is also driving different architecting approaches in other, more or less loosely related areas - e.g. reconfigurable logic or intentional programming, to name two.

I for one like the idea of added flexibility when possible, but I also completely agree that it's a valid question just how much of a gain (if any) that really makes against more "conventional" stuff (if we can call it that).

Even if we remain with a "CPU" and a "graphics core", given the directions MS are taking don't we have the right kinds of flexibility and programmability? With DX10 you can more or less reach the point where the physics and AI are controlled by the CPU and the graphics are nigh on completely controlled by the 3D pipeline, which seems like a reasonable boundary for the types of jobs to take place - is there much reason to break beyond this?
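To make that split concrete, it amounts to something like the frame loop below - a minimal sketch with hypothetical names throughout: simulation state lives with the CPU, and the 3D pipeline takes over once the draw calls are issued.

    // A minimal sketch of that division (hypothetical names): the CPU owns the
    // simulation, the graphics core owns everything from vertices to pixels.
    struct World { /* entities, rigid bodies, AI agents ... */ };

    void updatePhysics(World&, float /*dt*/) { /* integrate bodies, resolve collisions (CPU) */ }
    void updateAI(World&, float /*dt*/)      { /* pathfinding, decisions (CPU) */ }
    void drawScene(const World&)             { /* build draw calls for the 3D pipeline */ }

    void frame(World& world, float dt)
    {
        updatePhysics(world, dt);   // CPU-side job
        updateAI(world, dt);        // CPU-side job
        drawScene(world);           // everything visual handed off to the graphics core
    }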

On the flipside - given the direction Sony appears to have taken so far, is there not the possibility that they are trading off programmability at one end for quite fixed function at the other (raster) end? In which case, is one any more desirable than the other?
 
london-boy said:
I mean, DC's VGA output didn't hurt anyone, did it? The games defaulted to 480i for everyone; then if a consumer wanted to switch to VGA, he could. That was almost 5 years ago.

How is "putting the option in there" excluding part of the customers, if the games are internally rendered at hi-res anyway (IF that will be the case, of course)? I don't know, it just sounds stupid to me...

DC VGA did hurt a little, and towards the end of its life a few developers were starting to want to regain the fillrate spent on filling a 640x480 buffer over a 640x240 field-adjusted buffer. Probably the main reason it wasn't used that much was a hardware quirk that meant you didn't get the extra RAM back by using the smaller buffer.
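As a rough illustration of what was at stake (assuming a 16-bit colour buffer, which is my assumption, not a statement about any particular title):

    #include <cstdio>

    int main()
    {
        const int  bytesPerPixel = 2;                    // assumed 16-bit colour buffer
        const long full  = 640L * 480 * bytesPerPixel;   // progressive/VGA buffer: 600 KB
        const long field = 640L * 240 * bytesPerPixel;   // field-adjusted buffer:  300 KB
        std::printf("full: %ld KB, field: %ld KB\n", full / 1024, field / 1024);
        // Twice the pixels to fill every frame for VGA output - and, per the quirk
        // mentioned above, the smaller buffer didn't even hand the RAM difference back.
        return 0;
    }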

If we really get a completely free HDTV option we will use it, BUT if we're going to impair visuals (fillrate-expensive things like particles and post-processing) for the majority of consumers, it's always going to be way down the list of priorities.
 
Fafalada said:
akira said:
I know it's not 100 percent fair (250nm vs 220nm) but still, that is a wide gap in die size (over 2x) that doesn't exactly show up on the screen in any way, shape, or form.
You are talking about a GF1? Ok, I'll bite, this is based on what?

Faf, I get my numbers from here -

Pentium III specs - www.sandpile.org/impl/p3.htm
Playstation2 press release - www.otherlandtoys.co.uk/ps2press.htm
The only chip die area I can't find again was the GF256 die size, although all NV chips at that time were around ~12x12mm.
 
DeanoC said:
DC VGA did hurt a little, and towards the end of its life a few developers were starting to want to regain the fillrate spent on filling a 640x480 buffer over a 640x240 field-adjusted buffer. Probably the main reason it wasn't used that much was a hardware quirk that meant you didn't get the extra RAM back by using the smaller buffer.

That’s weird, I’m sure I heard something about Melbourne House using field buffering in Test Drive: LeMans, to get VRAM enough to store rip-maps.
 
Squeak said:
That’s weird, I’m sure I heard something about Melbourne House using field buffering in Test Drive: LeMans, to get VRAM enough to store rip-maps.

The exact story was that they were planning to use field rendering in their next-after-Le Mans game to improve polycount. The game was never started.
 
DaveBaumann said:
Again - how are discrete blocks actually an issue (if this is what's to happen)? As I explained, DirectX can already effectively limit the required processing to those blocks, and this is only extended further with DX10.

However, surely even the Cell based system will have some discrete elements (i.e. processor and raster back end).

Faf already touched upon this, but I’ll answer the latter sentence. I, in an ideal world, still believe that Tim Sweeney’s future is a sound progression. 3D acceleration came into existence for reasons which, IMHO, are sound, but as the IHVs’ ICs progressed they took on tasks for reasons which were valid then but are beginning to disappear - namely the move to programmability throughout the ‘pipeline’. Why you want arbitrary bounds between the CPU and GPU isn’t catching on with me - physics for example. What good are physics calculated in a Shader on a GPU if it doesn’t affect the physics calculations on the CPU? How limited in use is that forward looking?
Yet, the core reasons for a 3D IC still stand as there are huge speed-ups to be had by implementing the most basic of iterative functions in logic constructs. But, outside of filtering and sampling (where the B3D consensus seems to be that it will remain static and away from any truly stochastic implementation for many moons), why would you want arbitrarily defined resources?

At the moment I'd say it would be tough for Sony to reach ATI's fragment shading potential.

Since you seem fond of asking for my 'expert' opinion on matters, perhaps I'll invoke your years of working at an IHV (like the B3D founders) and ask a simple question, 'Why?'

What's so unique, as per the logic constructs, of an ATI part that's running a shader/microprogram as opposed to, say VU1 or an APU?

Again, you come out with such things Vince, but where is your expertise on making such statements. You've not even effectively seen what DX9 can do yet and you write off DX10 as "mediocre". These are baseless words.

Cute. Then your DXNext recap sucked, because explain to me how DXNext is anything but a linear extension of where DX has been headed. Can you also say, in your time-tested opinion, that if all legacy requirements were removed from DX (as is theoretically possible on a console) - the best you could come up with is DXNext? At the very least, I'm sure you'd get all those OGL2 supporters to question you.

No, it's different from what Sony promises. No doubt Sony's solution will be restrictive in other areas.

This isn't even debatable, of course it will. But, I'd assume it's where those restrictions reside that will matter.

I didn't ask if you find it boring I asked how this was an issue, something which you failed to answer.

Whoa.. Alright. This was the dialogue:

Dave said:
How is progression from what's known and accepted a bad thing?

Vince said:
Personally, I find this to be a boring aspect. Ultimately it's protected from defeat by the fact that it's the conservative strategy, but if you look at the history of human knowledge and advancement you'll find that although the vast majority of discoveries were of your linear type, they'd never yield the paradigm shift that someone like Democritus, Kepler/Newton, Frege/Russell or Einstein (among countless others) did.

Ok Dave, seriously what more do you want me to tell you? Something about you being a galactic hero of 3D? My further writing was explaining the known ideology that knowledge expansion is a mostly linear affair, yet it's the people who start with a clean slate and take a chance that change the world. So, what I'd hope you'd extract from this when I wrote it is that 'what's wrong' with this strategy is that you never advance to dramatically better methodologies. Are you going to tell me that without Einstein and his Invariance/Relativity Theories our concepts of the universe would have just naturally evolved? A similar progression applies to all areas of study - including those in the 3D realm.

How this is conservative, I don't really know - remember, Sony's route appears to be more traditional in the shading sense. The shader route that consumer graphics/DX have taken in terms of high powered fragment shading is the one that is deviating from the traditional path.

How quickly we forget that PS2 launched in early 2000 - definitely Sony, whose designs were so backwards and traditional compared to, say, what 3dfx was doing... an ideology and 3D progression that - you - supported. I mean, let's try to be consistent, bud. Granted what we think changes with time, but back then you were supporting the 3dfx route (fillrate and bandwidth vis-a-vis VSA) over the rudimentary and mostly useless TCL solutions (nVidia's NV10).


Developers already had a long time to gain some understanding of PS2 by the time competing hardware came out. But then, look at the arguments of DC vs PS2 back when it was released.

Exactly, look at the arguments about how the DC was easier to program, with a better programming model, better "features" and a more orthodox architecture. And, yet, look who's courting the developers.

It's the same argument as the early XBox one about how programmers would give-up on the PS2 and move to the XBox because it's so much easier and more tolerant of their needs and schedules. As I stated before, this is BS - reread my reasons, I don't want to type anymore.
 
3D acceleration came into existence for reasons which, IMHO, are sound, but as the IHVs’ ICs progressed they took on tasks for reasons which were valid then but are beginning to disappear - namely the move to programmability throughout the ‘pipeline’

That’s all well and good, but we’re not there yet – and I doubt Cell will be what Tim was talking about either.

Why you want arbitrary bounds between the CPU and GPU isn’t catching on with me - physics for example. What good are physics calculated in a Shader on a GPU if it doesn’t affect the physics calculations on the CPU? How limited in use is that forward looking?

I think I’ve been playing games with physics in them for a long while – how is this an issue?

Why would you want physics at the shader level? You’ll be dealing with nearly per fragment data back and forth there that will cause horrific volumes of data and calculation.
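To put a rough, purely illustrative number on that (the render-target size and readback path below are my assumptions, not anything a specific game does):

    #include <cstdio>

    int main()
    {
        // Assume a 512x512 RGBA float32 target holding per-fragment results,
        // read back to the CPU once per frame so the game logic can see it.
        const long texels        = 512L * 512;
        const long bytesPerTexel = 4 * 4;                  // four 32-bit float channels
        const long perFrame      = texels * bytesPerTexel; // 4 MB per frame
        const long perSecond     = perFrame * 60;          // ~240 MB/s at 60 fps, one way
        std::printf("%ld MB per frame, %ld MB per second\n",
                    perFrame / (1024 * 1024), perSecond / (1024 * 1024));
        return 0;
    }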

Yet, the core reasons for a 3D IC still stand as there are huge speed-ups to be had by implementing the most basic of iterative functions in logic constructs.

Basic? Not necessarily, focused – yes.

But, outside of filtering and sampling (where the B3D consensus seems to be that it will remain static and away from any truly stochastic implementation for many moons), why would you want arbitrarily defined resources?

For the same reason you always have – they are faster at their designed task than general purpose ones.

Since you seem fond of asking for my 'expert' opinion on matters, perhaps I'll invoke your years of working at an IHV (like the B3D founders) and ask a simple question, 'Why?'

What's so unique, as per the logic constructs, of an ATI part that's running a shader/microprogram as opposed to, say VU1 or an APU?

Are these operating at the fragment level?

Then your DXNext recap sucked, because explain to me how DXNext is anything but a linear extension of where DX has been headed. Can you also say, in your time-tested opinion, that if all legacy requirements were removed from DX (as is theoretically possible on a console) - the best you could come up with is DXNext? At the very least, I'm sure you'd get all those OGL2 supporters to question you.

Of course DX Next is an evolution of DX, however I’m asking where the deficiencies are – you’ve failed to point any out as yet. All you’ve said so far equates to “it’s an evolution, tsk, tsk, can’t be good” without explaining what pitfalls there are to the approach that it’s taking.

As for “legacy” requirements in DX Next, what are they? What’s “legacy” in DX Next that is an issue? DX Next isn’t legacy – previous versions of DX exist as entire legacy parts of the runtime, but that doesn’t mean that the evolution of the DX Next API itself is legacy. Point out what is legacy and why it’s an issue.

And, please, tell us what’s so different between OGL2.0 and DX Next (there are areas where what’s available in DX already surpasses the OpenGL2.0 specification, which still isn’t here yet).

Are you going to tell me that without Einstein and his Invariance/Relativity Theories our concepts of the universe would have just naturally evolved? A similar progression applies to all areas of study - including those in the 3D realm.

And these analogies are worthless until we see what happens – you can quote some things that don’t really bear much relevance to the topic at hand, and as Dio pointed out you can also quote a whole bunch of failures. I could also pick an infinite number of things that are inordinately successful from progressive development and refinement, but ultimately it doesn’t really assist this discussion.

Granted what we think changes with time, but back then you were supporting the 3dfx route (fillrate and bandwidth vis-a-vis VSA) over the rudimentary and mostly useless TCL solutions (nVidia's NV10).

I think you’ll be hard pushed to find me say it was useless – I questioned the need for it then. And, B3D as a whole was asking for programmability.

Exactly, look at the arguments about how the DC was easier to program, with a better programming model, better "features" and a more orthodox architecture. And, yet, look who's courting the developers.

<Shrug> From a hardware point of view you can clearly see that it took developers a much longer time to get up to speed with PS2, and this still appears to be the case given the performance thread here.
 
BTW, about the whole PS2 versus Xbox stuff (in terms of power and release dates): if Microsoft had released the Xbox around the same time as the PS2, couldn't they have used a GeForce 2 Ultra, and maybe saved enough (or been forced) to upgrade the memory to 128MB? While it would have sacrificed the pixel shaders, it would have kept most of the performance, and maybe instead of Halo's graphics we would have had Unreal Tournament 2003 graphics (great graphics through good texturing, not pixel shading).
I'm assuming the GeForce 2 Ultra was ready at that point (for a US release, like how MS launched the Xbox, so they would still have had a bit of extra dev time over Sony); I remember the Dreamcast came out shortly before the original GeForce, and I think the Xbox was about 6 months after the GeForce 3 (I got my Ti 200 around the same time the Xbox came out, and I know the original GeForce 3 came out well before that).


Oh, and maybe Sony/Nintendo/Microsoft will release multiple versions of their consoles: one that supports 480i and 480p, and one that has memory expressly for the purpose of supporting HDTV resolutions (it might have to be clocked higher too). Say a $200-$300 console for the mass market, and a $300-$500 one for those who must have the best.


BTW, why are DirectX and OpenGL so limited? I would imagine the drive to improve them would be based on whoever is producing hardware at the time, and while OpenGL may have to deal with companies that produce big expensive mainframes, the main users of DirectX seem to be Nvidia and ATI. And if there is a separate low-level PC version of OpenGL, I would imagine it would also closely follow the development of Nvidia and ATI's hardware.
 
Vince said:
It's the same argument as the early XBox one about how programmers would give-up on the PS2 and move to the XBox because it's so much easier and more tolerant of their needs and schedules. As I stated before, this is BS - reread my reasons, I don't want to type anymore.
The effect of a highly difficult to program architecture will not be to reduce support - support will be a given assuming a console reaches critical mass, which it seems likely Sony and Microsoft will achieve with the next generation.

Instead, it will be to reduce the apparent typical capability of the hardware, as less-than-elite programmers will not be able to achieve the console's true potential.

Of course, if the games are designed for the lowest common generational denominator - which might be more determined by memory size, texture / geometry compression, etc. than by 'performance' - there will be no significant visible difference between titles.

If there would be a worrying aspect, it might be that highly innovative game concepts move away from 'awkward' architectures onto easier to program architectures. Conversely, highly innovative technical titles might move onto the awkward architecture if it offers additional opportunities for the technology. It's Swings & Roundabouts.
 
gamesradar.msn.co.uk (http://gamesradar.msn.co.uk/news/default.asp?subsectionid=158&articleid=66286&pagetype=2) said:
First PS3 [and XBox2] game confirmed

[10/12/2003: 09:31]
Gran Turismo 5? Metal Gear Solid 4? Grand Theft Auto 4? Nope. It's a "vehicle-based game" from Climax
Brighton-based developers Climax have become the first company to confirm that they're currently working on a game for PlayStation 3. Codenamed Avalon, the game is also planned for release on Xbox 2.

The vehicle-based affair apparently sees the studio branching out to include land, sea and air vehicles. Blimey.

Beyond that - and the usual bumf about "groundbreaking graphics, sensational sound and gripping gameplay" - Climax have only stated that the game is "already at a remarkably advanced stage of development with a fully playable online prototype housed at the Brighton office."

However, Climax have yet to secure a publisher for the game. We'll bring you more news on PS3, Xbox 2 and, indeed, Avalon as soon as we have it.
 
ChryZ said:
gamesradar.msn.co.uk (http://gamesradar.msn.co.uk/news/default.asp?subsectionid=158&articleid=66286&pagetype=2) said:
First PS3 [and XBox2] game confirmed

[10/12/2003: 09:31]
Gran Turismo 5? Metal Gear Solid 4? Grand Theft Auto 4? Nope. It's a "vehicle-based game" from Climax
Brighton-based developers Climax have become the first company to confirm that they're currently working on a game for PlayStation 3. Codenamed Avalon, the game is also planned for release on Xbox 2.

The vehicle-based affair apparently sees the studio branching out to include land, sea and air vehicles. Blimey.

Beyond that - and the usual bumf about "groundbreaking graphics, sensational sound and gripping gameplay" - Climax have only stated that the game is "already at a remarkably advanced stage of development with a fully playable online prototype housed at the Brighton office."

However, Climax have yet to secure a publisher for the game. We'll bring you more news on PS3, Xbox 2 and, indeed, Avalon as soon as we have it.



mmmm... Climax were also "the first ones with a racing engine for ps2 that could push in excess of 20 million polygons with hi-res textures and Anti-aliasing" in 2000, which of course was never released, or used, or ever heard of thereafter for that matter.
 
london-boy said:
mmmm... Climax were also "the first ones with a racing engine for ps2 that could push in excess of 20 million polygons with hi-res textures and Anti-aliasing" in 2000, which of course was never released, or used, or ever heard of thereafter for that matter.
Well, it's not all bad ... Climax did quite a decent job with their MotoGP2 X-Box port.
 
First off, I apologize for the tardy reply. I've been busy and without sleep for a few days; I could hardly justify a post when there were more important things at hand.

DaveBaumann said:
That’s all well and good, but we’re not there yet – and I doubt Cell will be what Tim was talking about either.

So, if I make a comment based on my experience and on what I project analogous scenarios in the future to be like, does it not apply unless I specifically call out that particular situation? For example, if Mario Tricoci says, "I love brunettes, the future of hair color style is dark," and concurrently a girl is thinking about changing her color to dark brown... does that guidance and experience from an expert in the field not apply just because Mario wasn't specifically talking about her?

What Tim said, while not in direct response to PS3/Broadband Engine, still applies if it falls under the ideas and comments he (as a professional) makes.

Dave said:
I think I’ve been playing games with physics in them for a long while – how is this an issue?

I specifically stated physics as calculated in a shader on the GPU. How is what you stated in any way relevant?

Dave said:
Why would you want physics at the shader level? You’ll be dealing with nearly per fragment data back and forth there that will cause horrific volumes of data and calculation.

Well, you're keen on pointing out that I don't have first-hand experience (which, oddly enough, goes both ways), so let's ask someone who posts here and does.

Humus's demo of 11.20.03, which contains physics computed fully in a shader, on a GPU, which creates the disturbances.

Or, alternatively, you can find the thread in your 3D Technology & Hardware Forum in which this is debated (I can't find it exactly); they used a flag and its physics as an example.
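For what it's worth, the sort of maths such demos run per texel is usually just a damped 2D wave-equation step - written out below on the CPU in C++ for clarity. This is a guess at the general technique such GPU "water" demos use, not a description of Humus's actual shader code:

    // One step of a damped height-field wave update, the classic way GPU water
    // demos animate disturbances. h0 = previous heights, h1 = current, h2 = output.
    // In a fragment shader, each texel evaluates this independently.
    void waveStep(const float* h0, const float* h1, float* h2,
                  int W, int H, float c = 0.5f, float damping = 0.99f)
    {
        for (int y = 1; y < H - 1; ++y)
            for (int x = 1; x < W - 1; ++x)
            {
                const int   i         = y * W + x;
                const float laplacian = h1[i - 1] + h1[i + 1] + h1[i - W] + h1[i + W] - 4.0f * h1[i];
                h2[i] = damping * (2.0f * h1[i] - h0[i] + c * laplacian);
            }
    }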


Dave said:
Vince said:
Yet, the core reasons for a 3D IC still stand as there are huge speed-ups to be had by implementing the most basic of iterative functions in logic constructs.

Basic? Not necessarily, focused – yes.

Semantics. I'd propose that what should be implemented in hardware are the functions whose computational requirements are linear. Anything else that has higher requirements and scales arbitrarily faster - perhaps articulated as O(Shader) - shouldn't be bound to arbitrary constructs (e.g. a Fragment Shader as opposed to a Vertex Shader, or an isolated PPP).

What gets me is what you do again in your next response – this arbitrary definition of “dedicated” logic as opposed to “programmable.” It just doesn’t make sense to me; perhaps you could honestly explain it to me, because the last ‘front-end’ TCL I’ve seen that’s dedicated was DX7. Everything since has been more and more grey – to the point where I need to ask (and am still waiting) for you to explain how a VU/APU differs from today’s R300 VS. Because, according to you, one is dedicated and one is programmable, but when you get down and dirty where’s the huge difference in actual logic?

For the same reason you always have – they are faster at their designed task than general purpose ones.

Why? More specifically, is this always true or task dependent? Faster than what? Again, the constant and linear functions should be implemented in hardware, but beyond that - in the day of long and numerous shaders - who's to say that an architecture like Cell or Philips' CAKE is inferior to an ATI/nVidia Vertex Shader?

What I find most peculiar about your position is that if true, we'd all be utilizing the fixed functionality that was DX7 and previous because of how fast it was on a per capita basis. Already the movement towards shaders has driven us away from the core ideology you hold (that fixed is faster than programmable) - so, if you acknowledge (as you must) that programmability is more desirable than strict functionality - why stop where DX9/Next is? Because ATI is? ;)

Dave said:
Vince said:
Since you seem fond of asking for my 'expert' opinion on matters, perhaps I'll invoke your years of working at an IHV (like the B3D founders) and ask a simple question, 'Why?'

What's so unique, as per the logic constructs, of an ATI part that's running a shader/microprogram as opposed to, say VU1 or an APU?

Are these operating at the fragment level?

At this point No, but within a year or two - Yes. Question still stands, run with it.

Of course DX Next is an evolution of DX, however I’m asking where the deficiencies are – you’ve failed to point any out as yet. All you’ve said so far equates to “it’s an evolution, tsk, tsk, can’t be good” without explaining what pitfalls there are to the approach that it’s taking.

Ok, well for starters I don't believe they've consolidated computational resources as well as they could have. They effectively merged the shaders by unifying the syntax models, but why stop there? Then they have added Topology Support and increased Tessellation, which compose your PPP. Why are these separate constructs? Do you think it will remain so?

It still appears to be a legacy PC ideology where you have big pools of resources and little bandwidth between them. Where you have everything 3D on one IC (GPU) and everything else on the other IC (CPU) – Ok, but is this most optimal use of resources?

And, please, tell us what’s so different between OGL2.0 and DX Next (there are areas where what’s available in DX already surpasses the OpenGL2.0 specification, which still isn’t here yet).

You have about 4-5 threads in the 3D forum talking about this to some extent, including one which talks about preferring OGL over the DX model, which still hasn't died.

Dave said:
Vince said:
Are you going to tell me that without Einstein and his Invariance/Relativity Theories our concepts of the universe would have just naturally evolved? A similar progression applies to all areas of study - including those in the 3D realm.

And these analogies are worthless until we see what happens – you can quote some things that don’t really bear much relevance to the topic at hand, and as Dio pointed out you can also quote a whole bunch of failures. I could also pick an infinite number of things that are inordinately successful from progressive development and refinement, but ultimately it doesn’t really assist this discussion.

Everything is worthless until you observe it, if you want to play that game. And it does bear relevance, as far as showing that ideologies which are progressions of previous thoughts are never revolutionary and never exceed the status quo by a large amount.

This is what epistemology deals with. What you're stating (e.g. your basic argument point) basically lies in opposition to current thinking, which holds that knowledge expansion is an asymmetrical affair in which mostly linear advances happen that create the bulk of our understanding, yet there must be a small disturbance from which the truly revolutionary progressions are made. This then raises the status quo for the linear thinking. And the cycle repeats.

This holds for all fields of human endeavor, 3D graphics and all. What Dio is stating is a fundamental tenet of what I'm stating; you can't have just revolutionary ideas. Yet, DX has basically extinguished any chance of a single new, radical ideology breaking out on the PC. Thank the Lord for consoles.

Dave said:
Vince said:
Granted what we think changes with time, but back then you were supporting the 3dfx route (fillrate and bandwidth vis-a-vis VSA) over the rudimentary and mostly useless TCL solutions (nVidia's NV10).

I think you’ll be hard pushed to find me say it was useless – I questioned the need for it then. And, B3D as a whole was asking for programmability.

Fine, I'll grant you that. Yet, it's hypocritical to make comments against the design of PS2 - which was the ideology you were supporting (vis-a-vis 3dfx) at that time over what was possible given the limitations.

Dave said:
<Shrug> From a hardware point of view you can clearly see that it took developers a much longer time to get up to speed with PS2, and this still appears to be the case given the performance thread here.

Perhaps. Most likely true. Yet, this has little to do with the final results as seen by the consumer. In the end, the PS2 is still getting the games, still getting the content. In the end, it's economics - not hardware.
 
Ok, well for starters I don't believe they've consolidated computational resources as well as they could have. They effectively merged the shaders by unifying the syntax models, but why stop there?

[...]

It still appears to be a legacy PC ideology where you have big pools of resources and little bandwidth between them. Where you have everything 3D on one IC (GPU) and everything else on the other IC (CPU) – Ok, but is this most optimal use of resources?

Can you elaborate further?

DirectX is a front end that exposes hardware capabilities.

In the extreme, you can compile down shaders into native code, and run everything on the host CPU -- which is exactly what refrast does.

So how does DX specify in any way how computation resources are "consolidated"? That seems to me as more of an implementation decision than anything else.
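As a concrete illustration of that last point: in D3D9 the very same API calls can be serviced by the reference rasterizer, which runs the whole pipeline in software on the host CPU. Only the device type changes (a minimal sketch, error handling omitted; hWnd is assumed to be an existing window):

    #include <d3d9.h>

    IDirect3DDevice9* createRefDevice(HWND hWnd)
    {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;

        IDirect3DDevice9* device = NULL;
        // D3DDEVTYPE_REF: the reference rasterizer - shaders and all execute in
        // software on the CPU behind the same API. Swap in D3DDEVTYPE_HAL and the
        // identical calls run on the GPU instead; the API itself doesn't care where.
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_REF, hWnd,
                          D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &device);
        return device;
    }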
 
What Tim said, while not in direct response to PS3/Broadband Engine, still applies if it falls under the ideas and comments he (as a professional) makes.

Yes, and Cell isn’t getting to anything like the power that Tim was referencing. Go and read the ‘Tim’s thoughts’ threads in the 3D Tech forum – you have both graphics and CPU luminaries in there thrashing it out. Simply put, we ain’t there yet.

Humus's demo of 11.20.03, which contains physics computed fully in a shader, on a GPU, which creates the disturbances.

You asked why there are arbitrary bounds between the CPU and GPU – in the instance you gave, what advantages would there be for processing it on the CPU?

For a gaming application there is no reason why the "disturbances" wouldn't be controlled by key points within the CPU, which then are translated into the overall geometrical or per pixel effects - such as is used in many other cases.

What gets me is what you do again in your next response – this arbitrary definition of “dedicated” logic as opposed to “programmable.” It just doesn’t make sense to me; perhaps you could honestly explain it to me, because the last ‘front-end’ TCL I’ve seen that’s dedicated was DX7.

Because there is logic that is going to be more frequently utilized within graphics than in other areas – the logic that is being implemented is focused on the types of functions that are going to be run most frequently. For instance, even when DX10 comes around branching will be a part of the API, but I suspect that 3D hardware will be nowhere near as efficient at it as a CPU, since it’s not something that would be expected of developers; however, the shader ALUs are going to be focused on the types of instructions most frequently required by graphics applications. (And the old dedicated texturing units will remain for a while yet, as these are just not efficient within a program.)
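As a rough illustration of why branching is the odd one out (a generic sketch of how shader hardware of this era tends to handle conditionals, not any vendor's actual codegen): a "branch" is typically turned into evaluating both sides and selecting, so the ALUs keep doing the arithmetic they are built for.

    // What a CPU is good at: genuinely skipping the untaken side.
    float shadeBranchy(float nDotL, float diffuse, float ambient)
    {
        if (nDotL > 0.0f)
            return ambient + diffuse * nDotL;
        return ambient;
    }

    // What SM2.0-class shader hardware effectively does: compute both, then select.
    float shadeSelect(float nDotL, float diffuse, float ambient)
    {
        const float lit   = ambient + diffuse * nDotL;
        const float unlit = ambient;
        const float mask  = nDotL > 0.0f ? 1.0f : 0.0f;   // a compare/predicate, not a jump
        return mask * lit + (1.0f - mask) * unlit;        // both paths are paid for
    }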

Why? More specifically, is this always true or task dependant? Faster that what? Again, the constant and linear functions should be implemented in hardware, but beyond that - in the day of long and numerous shaders - whose to say that an architecture like Cell or Phillip's CAKE is inferior than an ATI/nVidia Vertex Shader?

Why not a fragment shader?

This is the fundamental difference I keep harking back to – I’m trying to find out if PS3 has any dedicated fragment processing abilities, and from what I’ve seen that doesn’t really seem to be the case. While Cell is probably going to be able to chuck around polygons far better than any Vertex Shader, from what I can tell it’s going to need to. If this is the case then all the detail that PS3 is delivering is going to have to be done via increased geometry levels – now, I’m sure this is going to be desirable in some areas but disadvantageous in others (i.e. take a per-pixel lit poly-bump character – “emulation” of detail is very easy through fragment shaders, while getting that same level of minute detail via polys will require an enormous quantity).
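For anyone following along, the "emulation of detail" here is normal mapping: the geometry stays coarse and the lighting is done per pixel with normals fetched from a texture. A minimal sketch of the per-pixel part (plain C++ standing in for a fragment shader; the names and the normal-map sampler are hypothetical placeholders):

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Hypothetical placeholder: a real implementation would decode a tangent-space
    // normal from a texture; here we just return "straight up".
    Vec3 sampleNormalMap(float /*u*/, float /*v*/) { return Vec3{0.0f, 0.0f, 1.0f}; }

    // Per-fragment diffuse term: the surface detail comes from the sampled normal,
    // so the underlying mesh can stay low-poly.
    float litIntensity(float u, float v, const Vec3& lightDirTangentSpace)
    {
        const Vec3  n     = sampleNormalMap(u, v);
        const float nDotL = dot(n, lightDirTangentSpace);
        return nDotL > 0.0f ? nDotL : 0.0f;   // clamp light arriving from behind
    }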

What I find most peculiar about your position is that if true, we'd all be utilizing the fixed functionality that was DX7 and previous because of how fast it was on a per capita basis.

No Vince, I’ve never said anything of the sort – there is a difference between “fixed” and “programmatically focused”. Note that I’m not necessarily even saying that Cell isn’t this, but it would appear to be focused on something different – its programmability is all up front, and it seems like there will be a simple raster backend.

At this point No, but within a year or two - Yes. Question still stands, run with it.

So, Cell is actually going to be generating the Pixels?

Ok, well for starters I don't believe they've consolidated computational resources as well as they could have. They effectively merged the shaders by unifying the syntax models, but why stop there? Then they have added Topology Support and increased Tessellation, which compose your PPP. Why are these separate constructs? Do you think it will remain so?

The API exposes capabilities; it doesn’t define the implementation. Elements such as “PPP” may well make sense for generation within the shader in some cases; others may feel a dedicated unit will be better/faster for what they desire to implement.

It still appears to be a legacy PC ideology where you have big pools of resources and little bandwidth between them. Where you have everything 3D on one IC (GPU) and everything else on the other IC (CPU) – Ok, but is this most optimal use of resources?

This isn’t a weakness of what’s implemented in DX10 though; you are now talking about more general issues – however, what you speak of is not something specific to the PC or DX, it’s something that is common throughout the industry. The question is not whether the division is optimal, but what’s the overriding need to have anything different, and is the market ready?

But, again, what are the specific issues with the directions that DX10 itself has taken?

You have about 4-5 threads in the 3D forum talking about this to some extent, including one which talks about preferring OGL over the DX model, which still hasn't died.

I asked for you to point something out Vince, not to vaguely redirect me elsewhere. The point I’m getting at is that frequently you make vague derisive references to things without necessarily appearing to have much basis for them. Fundamentally the principles of DX and OGL are following the same path in terms of technology, and as I mentioned even VS3.0 exceeds OGL2.0’s specification at the moment, let alone VS/PS3.0 – IIRC most of the differences people spoke about were ideological differences rather than feature / function differences.

Yet, it's hypocritical to make comments against the design of PS2 - which was the ideology you were supporting (vis-a-vis 3dfx) at that time over what was possible given the limitations.

I’m not making any hypocritical comments on PS2’s design Vince, I’m repeating what’s commonly known – is it not a fact that they decided to make the raster end of PS2 more fixed function and shader limited? If so, how is this hypocritical? All I’m saying is that you begin to see the direction that was being sought then – they chose to beef up the front end whilst limiting the back end; basically they chose to focus the most resources up front. It seems to me that what I’ve heard of PS3 is an evolution of the same direction – they appear to be going for more at the front end, requiring less at the back end. Now, I’m positive that this approach will have its advantages and I’m sure that it will have its disadvantages as well.
 
Why not a fragment shader?

This is the fundamental difference I keep harking back to – I’m trying to find out if PS3 has any dedicated fragment processing abilities, and from what I’ve seen that doesn’t really seem to be the case.

I think you're right that PS3 might not have dedicated fragment processing abilities, but those processors (even if they aren't dedicated to fragment processing) should be able to do fragment processing. It will probably not be as fast as dedicated hardware, but the performance may be good enough.

While Cell is probably going to be able to chuck around polygons far better than any Vertex Shader, from what I can tell it’s going to need to. If this is the case then all the detail that PS3 is delivering is going to have to be done via increased geometry levels – now, I’m sure this is going to be desirable in some areas but disadvantageous in others (i.e. take a per-pixel lit poly-bump character – “emulation” of detail is very easy through fragment shaders, while getting that same level of minute detail via polys will require an enormous quantity).

If the hardware allowed it, wouldn't it be more desirable to actually take those high-res models that they used to generate the normal maps for a poly-bump character and use them in the game?

Also, aren't we supposed to be seeing per-pixel accurate displacement mapping with VS3.0/PS3.0? Wouldn't that generate an enormous quantity of polygons too?

they chose to beef up the front end whilst limiting the back end

I don't think they're limiting the back end; I think they just concentrated the back end on multipass, to be able to do that cinematic rendering, like the Voodoo5. I think 3dfx made the same decision back then; it's just unfortunate they didn't have a beefy front end.
 