Is it time to split GPU into two parts again???

Status
Not open for further replies.
I believe now is the time to split the GPU into two physical chips, one implementing the vertex shaders and the other implementing the pixel shaders plus eDRAM.

The reason for this split is that while vertex shader performance is largely dependent on clock speed, pixel shader performance is dependent on memory bandwidth. The obvious solution to the memory bandwidth limitation is eDRAM, but eDRAM processes tend to have an ill effect on clock speed, and the final part tends to be substantially slower than an equivalent all-logic part, thus hurting vertex shader performance. Since the paths to higher performance are incompatible between vertex shaders and pixel shaders, it is logical to separate them into two parts.

I hope to see XGPU2 as two separate parts: a 2 GHz VSP chip implementing 8 vertex shaders and a 1 GHz PSP chip implementing 8 pixel shaders plus 5.3 MB of eDRAM.
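As a back-of-envelope sanity check on the proposed 5.3 MB of eDRAM, here is a quick sketch of whether a frame buffer of the era would fit. The resolutions and byte counts are my own illustrative assumptions, not figures from the post:

```python
# Does 5.3 MB of eDRAM hold a complete frame buffer? (illustrative check)

def framebuffer_mb(width, height, color_bytes, depth_bytes):
    """Size in MB of a single color + depth buffer at the given resolution."""
    return width * height * (color_bytes + depth_bytes) / (1024 * 1024)

# 640x480, 32-bit color + 32-bit depth/stencil (4 bytes each)
sd = framebuffer_mb(640, 480, 4, 4)   # ~2.34 MB -> fits in 5.3 MB
# 1280x720, same formats
hd = framebuffer_mb(1280, 720, 4, 4)  # ~7.03 MB -> does NOT fit
print(f"SD: {sd:.2f} MB, HD: {hd:.2f} MB")
```

So under these assumed formats, 5.3 MB covers a standard-definition color + Z buffer with room left over, but not a 720p one.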
 
I find it just a small taste of God's grand irony that your ideology is more in line with SCEI's than with Microsoft's, whose DirectX will soon run on hardware with a unified shader structure.
 
...

I find it just a small taste of God's grand irony that your ideology is more in line with SCEI's
Not really. GS3 will do both T&L and rendering.

than Microsoft's, whose DirectX will soon run on hardware with a unified shader structure.
I doubt this is true, for the sake of backward compatibility (how will this unified shader handle older code? Rapidly switch context between VS and PS?), not to mention that it hurts performance. It is better to separate the VS and PS than to unify them.
 
Re: ...

DeadmeatGA said:
I find it just a small taste of God's grand irony that your ideology is more in line with SCEI's
Not really. GS3 will do both T&L and rendering.

I doubt that: you should be able to do T&L wherever you want :)

than Microsoft's, whose DirectX will soon run on hardware with a unified shader structure.
I doubt this is true, for the sake of backward compatibility (how will this unified shader handle older code? Rapidly switch context between VS and PS?), not to mention that it hurts performance. It is better to separate the VS and PS than to unify them.

Rapidly switching context might be one solution ;)
 
Dead, considering you're not an expert by any stretch of the word in the field of CMOS engineering (and hence should keep your mouth shut on matters such as whether eDRAM limits clock speed or not), there's nothing that prevents an IC from having two or more differently clocked domains, one of which may or may not be the pixel rendering pipes + eDRAM, and another the vertex shaders, for example.

Anyway, with a unified shader structure such a distinction wouldn't exist to begin with; they'd all just be "shaders", plain and simple.

*G*
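The load-balancing point behind a unified pool can be sketched with a toy model (entirely my own, assuming ideal scheduling and the unit counts proposed earlier in the thread):

```python
# Toy model: split design (8 VS + 8 PS) vs. a unified pool of 16 shaders
# when the workload is imbalanced between vertex and pixel work.
# "Work" is in arbitrary units; scheduling overhead is ignored.

def cycles_split(vs_work, ps_work, n_vs=8, n_ps=8):
    """Split design: each pool only runs its own kind of work,
    so total time is set by the more heavily loaded pool."""
    return max(vs_work / n_vs, ps_work / n_ps)

def cycles_unified(vs_work, ps_work, n=16):
    """Unified pool: any unit runs any work, so load always balances."""
    return (vs_work + ps_work) / n

# A pixel-heavy frame: 100 units of vertex work, 700 of pixel work
print(cycles_split(100, 700))    # 87.5 -- the PS pool is the bottleneck
print(cycles_unified(100, 700))  # 50.0 -- idle VS units pick up pixel work
```

Under this idealized model the unified pool is never slower than the split one; the real-world counterargument in the thread is the cost of context switching, which the model deliberately leaves out.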
 
Re: ...

DeadmeatGA said:
Of course this is a given. Guess you didn't know that.

Dead, for someone who's learned everything he knows about process tech from the back of cornflakes packages, you do seem sure of yourself... :rolleyes:

Don't state things as facts when you do not know whether you are right or not. Assumptions, no matter how firmly you believe in them, do not count, you know...!

Come back and lecture me once you've actually STUDIED the field. Thank you.


*G*
 
...

Dead, for someone who's learned everything he knows about process tech from the back of cornflakes packages, you do seem sure of yourself...
Yap. I don't see any GHz eDRAM product on the market, other than the POWER4.

Don't state things as facts when you do not know whether you are right or not.
Of course I know them as facts. Just look around: even Sony's own eDRAM parts run at half the clock speed of comparable all-logic parts.

Come back and lecture me once you've actually STUDIED the field. Thank you.
Would you care to lecture me on the switching speed of a logic transistor vs. an eDRAM transistor? Thank you, in case you are able to lecture me.
 
Re: ...

DeadmeatGA said:
Yap. I don't see any GHz eDRAM product on the market, other than the POWER4.

Word to the wise: Don't try to say something is impossible and then give an example of how that specific company has already done it. ;)

Of course I know them as facts.

You seem to know a lot as fact, huh?


Also, what you're advocating (i.e. breaking up computational resources) is very much what Sony is doing. A Vertex Shader as we now (and will continue to) conceptualize it shares a deep commonality with the concept of a VU, a concept turned into praxis on the PS2 which will only be furthered in the PS3/Cell. A "Pixel/Fragment Shader" isn't all that different fundamentally, as we'll see soon enough (perhaps come DX10).

So, by stating that you want to break up the shaders, you're proposing a 'weak' subset of what STI is doing with Cell. Basically, you have a large and flexible pool of vector resources spread out over two chips (or is that N>2 ICs with Cell computing?!?), in which you (at this point) don't appear to have any arbitrary restrictions on where the shader programs run. Thus, it's very likely that some developer could run a dual-shader scheme like the one you're advocating, with a form of fragment shading run on the "graphics IC" and vertex processing happening on the Broadband Engine.
 
How much bandwidth does a 2 GHz vertex processor with 8 vertex shaders need? How many polygons would you theoretically be able to transform and light at that speed?

How many pixels can be shaded with a 1 GHz pixel processor with 8 pixel shaders? How many pixels are you planning to display, and are you proposing that 5.3 MB of eDRAM fits the whole frame buffer, or other data as well?

How are you then going to unify the vertex and pixel data, which are meaningless apart at this stage? Across what kind of bus, and how much bandwidth will it need?

Will external bandwidth be enough to feed 2 GHz × 8 VS, and will 1 GHz eDRAM (of only the 5.3 MB you specify) be enough to feed 1 GHz × 8 PS continuously so that we get good efficiency?

I don't know, but from what you describe it seems like a bad design. Just number games.
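For what it's worth, here is the rough peak-rate arithmetic those questions imply, plugging in the thread's hypothetical 2 GHz × 8 VS / 1 GHz × 8 PS part. The cycles-per-vertex and bytes-per-pixel figures are illustrative assumptions on my part, not known values:

```python
# Peak-rate arithmetic for the hypothetical split part (assumed numbers).

vs_rate = 2e9 * 8 / 10      # vertices/s, assuming 10 cycles per vertex
ps_rate = 1e9 * 8           # pixels/s, assuming 1 pixel/cycle per shader

# Frame-buffer traffic to sustain that fill rate: 4 B color read + write
# plus 4 B Z read + write = 16 B per shaded pixel.
bw_gbs = ps_rate * 16 / 1e9  # -> 128 GB/s of frame-buffer traffic alone

print(f"{vs_rate:.2e} verts/s, {ps_rate:.2e} pixels/s, {bw_gbs:.0f} GB/s")
```

Even under these charitable assumptions, sustaining the pixel side needs on the order of 128 GB/s, which is exactly why the eDRAM sits next to the pixel shaders in the proposal; the open question the post doesn't answer is whether 5.3 MB is enough working set for it to help.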
 
jvd said:
Stop the pissing contest or this thread is closed.

How is this a pissing contest? DM posted trash and we're responding in a logical, yet inquisitive, manner. If there is a problem, how about you take out the source?
 
Vince said:
jvd said:
Stop the pissing contest or this thread is closed.

How is this a pissing contest? DM posted trash and we're responding in a logical, yet inquisitive, manner. If there is a problem, how about you take out the source?
Point taken. This thread is the problem. It's now locked.
 