RSX = Stream Processor!?!

leechan25 said:
I think a multi-core GPU is needed because the tasks being asked of this GPU will be different from those of a traditional GPU. There has never been a GPU that could perform GI, RT, and other tasks done by a farm of CPUs. Sony is asking this of its GPU and Cell. I mean, look at the PPU: GPUs could handle its tasks with no problems before. However, the standard has changed and the needs are greater, so hardware designers created the PPU. More is needed from a GPU that's going to create realtime CG games that look like movies.


I'm going to address this multi-core GPU thing head-on, because I think a lot of people are unclear on the whole difference between GPU and CPU architectures - and leechan, this is not specific to you.

CPUs exist in a world where, right now, parallelization of code (i.e. threading) is not all that common, and in the instances where it *does* exist, there are diminishing returns as you increase the number of cores.
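
To put a rough number on those diminishing returns: Amdahl's law says that if only a fraction p of your code can be threaded, then n cores can never buy you more than 1/((1-p) + p/n) of a speedup. A quick back-of-envelope sketch in C - the 70% figure is purely an assumption for illustration, not any real workload:

[code]
#include <stdio.h>

/* Amdahl's law: upper bound on speedup when a fraction p of the
   work parallelizes perfectly across n cores. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double p = 0.70; /* assume 70% of the code threads well */
    for (int n = 1; n <= 16; n *= 2)
        printf("%2d cores: %.2fx speedup\n", n, amdahl(p, n));
    return 0;
}
[/code]

Even with infinite cores that example tops out at 1/(1 - 0.7) ≈ 3.3x - diminishing returns in a nutshell.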

Ok - so why go multi-core then, right? Well, because we've kind of hit a wall in terms of processor speed. But we all know that whole story, so I'm going to skip over it.

Anyway, your typical multi-core CPU is two identical cores, for the purpose of increasing (doubling) compute power in a reasonable fashion. Obviously it costs you roughly two times the die area of a single core, and there are different ways to implement it, a la Athlon vs. Smithfield. Cache can also be divvied up in different ways, but we're not going there either.

So let's go to GPUs. Instead of a situation where threading is difficult, you have quite the contrary: work that is inherently, crazily parallel in nature. So much so that GPUs have effectively been 'multi-core' for years. What does a dual-core NV43 look like? Well, it looks a lot like an NV40 to me (sure, sure, ROP and clockspeed differences; bear with me, people). Want to double the performance of a GPU? Why add another core when doubling the pixel pipes, vertex shaders, ROPs, etc. gets you the exact same thing - only better, in many cases? In fact, to double those components is to add many 'cores' to begin with. Now, how much benefit you derive from those extra 'cores' will depend on memory bandwidth and a whole bunch of other junk we won't get into here, but the simple matter is that when people hear talk of multi-core GPUs, the instinctive reaction is either 'why?' or 'we already have that.'
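
To make the 'inherently parallel' point concrete, here's what per-pixel work boils down to once you strip away the graphics specifics: every output pixel depends only on its own inputs, so the loop can be split across as many pipelines/'cores' as you can afford, with no locks and no cross-thread chatter. A toy sketch in C - illustrative only, not real shader code:

[code]
#include <stddef.h>

/* Toy 'fragment shader': each output pixel is computed purely from
   its own input. No iteration reads another iteration's result, so
   the loop is trivially splittable across any number of pipelines. */
void shade(const float *in, float *out, size_t w, size_t h)
{
    for (size_t i = 0; i < w * h; i++)
        out[i] = in[i] * 0.5f + 0.25f; /* stand-in for real shading math */
}
[/code]

Contrast that with typical CPU code, where iteration i often depends on iteration i-1, and you can see why a GPU scales by simply adding more of the same units.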

The only reason one would actually do a true dual-core GPU would be if the targeted performance goal required such a massive transistor investment that yields on the resultant die size would simply be infeasible. What you would be left with then would instead be two smaller GPUs in a sort of on-package super-SLI. I don't know what to say to that, except that I know RSX in PS3 does *not* have that sort of transistor budget allocated to it.
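
For anyone wondering why yield falls off a cliff with die size, the classic back-of-envelope is the Poisson defect model: yield ≈ e^(-A·D), where A is die area and D is defect density. The numbers in this sketch are invented purely for illustration - they're nobody's actual process figures:

[code]
#include <math.h>
#include <stdio.h>

/* Poisson yield model: the fraction of good dice falls off
   exponentially with die area (cm^2) times defect density (per cm^2). */
static double yield(double area_cm2, double defects_per_cm2)
{
    return exp(-area_cm2 * defects_per_cm2);
}

int main(void)
{
    const double d = 0.5; /* assumed defect density, for illustration */
    printf("2 cm^2 die: %.0f%% yield\n", 100.0 * yield(2.0, d)); /* prints about 37 */
    printf("4 cm^2 die: %.0f%% yield\n", 100.0 * yield(4.0, d)); /* prints about 14 */
    return 0;
}
[/code]

Double the die area and the yield doesn't halve - it collapses exponentially. Hence, past a certain point, two smaller dice beat one monster die.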

If one wants to discuss the merits, benefits, and reasons to go dual-core on a GPU, that discussion is happening right now in the G71 speculation thread (which has turned into G80 speculation):

http://www.beyond3d.com/forum/showthread.php?t=26270&page=38
 
leechan25 said:
I think a multi-core GPU is needed because the tasks being asked of this GPU will be different from those of a traditional GPU. There has never been a GPU that could perform GI, RT, and other tasks done by a farm of CPUs. Sony is asking this of its GPU and Cell. I mean, look at the PPU: GPUs could handle its tasks with no problems before. However, the standard has changed and the needs are greater, so hardware designers created the PPU. More is needed from a GPU that's going to create realtime CG games that look like movies.
I really don't understand why you'd want to put some SPEs in there. If you have the extra transistors for those SPEs, you have the extra transistors for a monster of a G7x-based GPU, which would offer a far better performance boost and require far less modification.

Also, what's all this talk about ray-tracing, CPU farms, and GPUs that are "going to create realtime CG games that look like movies"? That's just PR talk. They like to talk big, but what we're going to see in next-gen consoles is not THAT far beyond what we're seeing right now in some PC and X360 titles. And that's pretty damn good if you take a look at how games looked just a few years ago. Coupled with some witty programming and great art assets, that can make some next-gen titles look absolutely breathtaking, to be honest.

If you want to know what the RSX is, you just have to look at the E3 slides, add Barbarian's comments into the equation, and that's pretty much it. I understand that the lack of final info from the horse's mouth can be frustrating, that it can make people fill that hole with whatever speculation crosses their minds, and that the longer this speculation period lasts, the wilder the speculation will be. I understand that a really exotic G7x-SPE-Visualizer solution can be far more entertaining as speculation material than a 550MHz G7x GPU, but that doesn't make it real, or even better performance-wise. People are talking as if a 550MHz G7x Nvidia GPU were something subpar, when in reality it's pretty cutting-edge technology - costly technology. Whatever improvements RSX has over that are just extra icing on the cake, IMO.
 
RSX

32 fragment pipelines, 4 disabled... 12 vertex pipelines, 4 disabled... 24 ROPs, 4 disabled... 4 fully functioning unified shader ALUs attached to 4MB of EDRAM. ;)
 
!eVo!-X Ant UK said:
Does anybody have a rough figure for how many transistors PureVideo etc. takes up in G70?

PureVideo is ~20 million according to Anand. But I'd wager that it's going to stay in some fashion, after hearing that RSX takes on the image processing responsibilities for Blu-Ray.
 
Mmmkay said:
PureVideo is ~20 million according to Anand. But I'd wager that it's going to stay in some fashion, after hearing that RSX takes on the image processing responsibilities for Blu-Ray.
Why would you want PureVideo for this? Can't Sony do whatever custom processing they want (I'm sure they want something other than nVidia's PC GPU image processing) via shaders?
 
Shifty Geezer said:
Why would you want PureVideo for this? Can't Sony do whatever custom processing they want (I'm sure they want something other than nVidia's PC GPU image processing) via shaders?

'In some fashion' - as in, maybe not PureVideo itself, but I feel we can't simply discard that 20-million-transistor budget for non-video-processing applications [which I think was !eVo!-X Ant UK's intention]. But yeah, we're all shooting in the dark right now as to how it's implemented in RSX. Hints of RSX Audio and Blu-Ray image processing suggest to me a PureVideo-esque functionality in silicon.
 
Mmmkay said:
'In some fashion' - as in, maybe not PureVideo itself, but I feel we can't simply discard that 20-million-transistor budget for non-video-processing applications [which I think was !eVo!-X Ant UK's intention]. But yeah, we're all shooting in the dark right now as to how it's implemented in RSX. Hints of RSX Audio and Blu-Ray image processing suggest to me a PureVideo-esque functionality in silicon.
Well, I know I missed that! More questions than answers there... but hopefully those will be addressed. Still, I don't see why PureVideo would be kept - I don't see what can be done on those transistors that can't be done on Cell itself.

But RSX has become a little more mysterious in the last week, so I'm open to whatever. Hopefully the reasons the design choices were made will all become clear in the end, and 'ease' of porting to PS3 will not play the key role.
 
MBDF said:
32 fragment pipelines, 4 disabled... 12 vertex pipelines, 4 disabled... 24 ROPs, 4 disabled... 4 fully functioning unified shader ALUs attached to 4MB of EDRAM. ;)

Ha! That certainly would be unique.

Speculation on your part, I assume.
 
I think he's taking a supposed G71, removing four of everything for Kutaragi's 'redundancy,' and putting in some GS B/C. Tell me if I'm wrong there though MBDF.
 
xbdestroya said:
I think he's taking a supposed G71, removing four of everything for Kutaragi's 'redundancy,' and putting in some GS B/C. Tell me if I'm wrong there though MBDF.

That's what I said in my post....he just copied the part I said about incorporating the GS in there with the 4 MB :LOL:
 
ROG27 said:
That's what I said in my post....he just copied the part I said about incorporating the GS in there with the 4 MB :LOL:

Ok gotcha, missed your sarcasm in the original reply. :cool:
 
Shifty Geezer said:
Why would you want PureVideo for this? Can't Sony do whatever custom processing they want (I'm sure they want something other than nVidia's PC GPU image processing) via shaders?

Well, if Sony wants video chat on a second monitor, you want to make sure your video stuff has as little impact on the CPU as possible, as the CPU will most likely be busy with a game on the other monitor. Having the CPU handle video entirely may cause the game to drop a lot of frames.
 
Edge said:
Well, if Sony wants video chat on a second monitor, you want to make sure your video stuff has as little impact on the CPU as possible, as the CPU will most likely be busy with a game on the other monitor. Having the CPU handle video entirely may cause the game to drop a lot of frames.


:???: Because it doesn't risk slowing down the GPU instead? The "second video processing" would have to happen somewhere; moving it from the CPU to the GPU would free up the CPU but might slow down the GPU instead, so the game on your first monitor would still risk being slowed down.

Having said that, I really don't think the scenario you mentioned (a game on one monitor and things not related to the game itself on the second) will happen.
 
london-boy said:
:???: Because it doesn't risk slowing down the GPU instead? The "second video processing" would have to happen somewhere; moving it from the CPU to the GPU would free up the CPU but might slow down the GPU instead, so the game on your first monitor would still risk being slowed down.

Having said that, I really don't think the scenario you mentioned (a game on one monitor and things not related to the game itself on the second) will happen.

Not related to the game? Video chat with the person you are currently playing against in the game.

Slow the GPU? PureVideo is a SEPARATE and UNIQUE processor on the GPU, but what you are suggesting would use processors on the CPU that may very well be used by the game already.
 
xbdestroya said:
I think he's taking a supposed G71, removing four of everything for Kutaragi's 'redundancy,' and putting in some GS B/C. Tell me if I'm wrong there though MBDF.

Pretty much.

Somehow 4MB of EDRAM attached to four unified shaders sounded spiffy - shaders that could take up the slack from the normal vertex or fragment shaders to keep things moving along... sort of a halfway-unified approach.
 
Here's an interesting tidbit - a DXT1 texture is always guaranteed to fit in RSX's texture cache. How many megs would that be?
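
For reference, DXT1 packs each 4x4 block of texels into 64 bits - half a byte per texel - so the footprint is easy to work out. A little C sketch; the texture sizes are just examples, and I'm not claiming to know RSX's actual cache size:

[code]
#include <stdio.h>

/* DXT1 stores each 4x4 block of texels in 8 bytes (0.5 bytes/texel). */
static unsigned long dxt1_bytes(unsigned long w, unsigned long h)
{
    return ((w + 3) / 4) * ((h + 3) / 4) * 8;
}

int main(void)
{
    printf("1024x1024 DXT1: %lu KB\n", dxt1_bytes(1024, 1024) / 1024); /* 512 KB */
    printf("4096x4096 DXT1: %lu MB\n", dxt1_bytes(4096, 4096) >> 20);  /* 8 MB */
    return 0;
}
[/code]

So a 1024x1024 DXT1 texture comes to 512KB, and a 4096x4096 one to 8MB.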
 