What will Sony add to their RSX GPU development?

version said:
An F1 game is a launch game... my friend told me some info:
Sony's developer houses are getting 1080p 40" TVs.
Wow, 40" 1080p tvs for a secondary display...what a waste. That would suck playing F1 on your 27" SDTV and then looking over at your $4000 HDTV to see a 2D map of the course.

I mean, you aren't expecting to play MGS4 in 1080p are you?
 
I guess not. Version has proved to be a very ironic person (lol, cf. p5)

Fox5, I can explain how Kyro II tile rendering works; I bring it up because those cards proved efficient in power/transistor ratio. I'm far from knowledgeable...
Hence the tiling method used to fit 720p on the Xbox 360 doesn't seem as friendly, but in fact I'm quite ignorant and should have kept quiet on this point, BUT it was a dream lol.
From what I understand, you have to calculate the vertices before you can determine that they're unseen and don't have to be filled.
It must be explained in articles on this site, I can't remember the name... Z pass?? occlusion??
In short: ignorant, lol.
What I know is that Xenos seems good at this, since it can throw all the ALUs needed at that operation thanks to its unified shader architecture.
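To make that unified-shader point concrete, here's a toy comparison (the ALU counts and per-vertex cost are my own illustrative assumptions, not real Xenos numbers): during a Z-only/visibility pass a unified pool can throw every ALU at vertex work, while a fixed split leaves most of the chip idle.

Code:
# Illustrative only: fixed vertex/pixel split vs a unified ALU pool during a
# Z-only pre-pass, where almost all of the work is vertex transforms.
TOTAL_ALUS = 48              # assumed size of the unified pool
FIXED_VS, FIXED_PS = 8, 40   # assumed fixed split of the same pool, for comparison

def vertex_rate(alus_on_vertex_work, cycles_per_vertex=4):
    """Vertices per clock, assuming each vertex costs a few ALU cycles."""
    return alus_on_vertex_work / cycles_per_vertex

print("fixed split  :", vertex_rate(FIXED_VS), "vertices/clock")
print("unified pool :", vertex_rate(TOTAL_ALUS), "vertices/clock")
# A unified architecture can hand the whole pool to vertex work while the
# pixel side would otherwise sit idle, which is why a Z/visibility pass gets cheap.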
 
liolio said:
But why would you want to put SPEs in the GPU? An SPE running at 550MHz should have terrible performance against an array of vertex shader pipelines built from the same number of transistors.
Who said the RSX SPE(s) (hypothetically speaking) would run at 550MHz? They would still carry the same 3.2GHz clock rate on the RSX GPU die.

If Sony has figured out a way for them to coexist between the Cell & RSX, then I see no problem with them coexisting on a single die (the RSX GPU).

The NVIDIA graphics core would still be rated at 550MHz (or 600MHz); the NVIDIA core doesn't need its core clock synchronized to work with the 3.2GHz SPE(s) on the GPU die. If the RSX has a "graphics setup engine" to handle the pixel shaders, vertex shaders, post-processing effects, and so forth, the setup engine would act like a buffer (so to speak) when putting all the data back together.

In the end all this is hypothetical talk...
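Purely as a sketch of that hypothetical setup-engine-as-buffer idea (none of this is real RSX hardware; the FIFO depth and rates are made up): the fast SPE clock domain and the slower graphics clock domain only need a small queue between them to stay decoupled.

Code:
from collections import deque

# Toy model: a fast producer (3.2GHz SPE doing vertex work) feeds a slower
# consumer (550MHz setup engine) through a small on-die FIFO. The FIFO is
# what lets the two clock domains run unsynchronised.
FIFO_DEPTH = 64
fifo = deque(maxlen=FIFO_DEPTH)

def spe_produce(vertices):
    """Fast side: push transformed vertices until the FIFO is full, then stall."""
    pushed = 0
    for v in vertices:
        if len(fifo) == FIFO_DEPTH:
            break                      # back-pressure: SPE stalls or does other work
        fifo.append(v)
        pushed += 1
    return pushed

def setup_engine_consume(per_clock=2):
    """Slow side: drain a couple of entries per graphics-core clock."""
    drained = []
    for _ in range(per_clock):
        if fifo:
            drained.append(fifo.popleft())
    return drained

spe_produce(range(100))                # fast domain fills the FIFO
print(len(fifo), "entries buffered")   # 64
print(setup_engine_consume())          # slow domain drains at its own pace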

Aaron Spink has already pointed this out.

Aren't we all just assuming the SPE(s) in the Cell are the same ones being used in the (hypothetical) RSX Quad design?

What if there are different steppings of the SPE cores that vary depending on the nature of the processor? There's already talk of IBM turning the SPE into a full-blown DP unit, for server purposes of course. Just what if... just what if the SPE(s) are specifically tailored, or re-designed, to fit the needs of the RSX GPU?


But dreams are beautiful things; anyway, I expect the PS3 to become more and more impressive over its lifetime thanks to Cell's brute force ;)

Yes!! :D
 
Alpha_Spartan said:
Even if they still have no games? When all you have to show at multiple trade shows is concept footage of games that just started development (or weren't even in development), you could have 5 million consoles all boxed up and ready to go and you'd still have to wait until you have a game library to launch with.

I'm sorry, but you'll have a hell of a time trying to convince me that Spiderman 2 on Blu-ray is going to push PS3s.

1. Ports and hastily made games. Luigi's Mansion was a GameCube tech demo.

2. It is likely they want costs to go down as well. That Blu-ray drive won't be cheap; they may have to sacrifice a little on the GPU.
 
Go on, why not put 8 SPEs on that GPU? And even a PPE? Add some holographic memory too... And while you're at it, there's version's favorite, the flux capacitor!
:devilish: :devilish: :devilish:
 
silhouette said:
Let me reiterate my questions:

1- What would you do with these SPEs in this configuration?

2- What kind of effects can you achieve with those SPEs that you can not do with vertex/pixel pipelines?

Well, as you know, programmable pixel shaders are all the rage at the moment. If the RSX had SPE cores on the GPU die (hypothetically), the SPE(s) could take care of vertex shader duties and so forth. By doing this, the NVIDIA G70- or G80-based graphics core would be freed up to add/substitute more transistors by eliminating the VS pipelines, allowing more pixel shader pipelines to be added. This particular design would achieve much higher (raw & programmable) pixel shader performance compared to the current G70 core. Sony/NVIDIA may also eliminate the video chip from the G70 (or G80) core, since the Cell CPU is pretty damn capable of decoding multiple streams of video data.
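For what it's worth, the work being offloaded in that scenario is basically batch matrix transforms, i.e. exactly the streaming SIMD stuff SPEs like. A minimal sketch of that hypothetical split (plain Python standing in for SPE code, all names made up):

Code:
# Hypothetical division of labour: an SPE batch-transforms vertices so the
# graphics core only has to rasterise and run pixel shaders.

def transform_vertex(m, v):
    """Apply a 4x4 matrix (row-major, nested lists) to a vertex (x, y, z, 1)."""
    x, y, z = v
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(4))

def spe_vertex_job(matrix, vertices):
    """What the SPE would do: chew through the whole vertex buffer."""
    return [transform_vertex(matrix, v) for v in vertices]

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
clip_space = spe_vertex_job(identity, [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
# clip_space would then be handed to the (hypothetically VS-less) pixel pipeline.
print(clip_space)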
 
Laa-Yosh said:
Horrible tradeoff, isn't it? They can keep all the framebuffer traffic within the eDRAM die, and they have to sacrifice what, 210 MB/sec, almost an entire percent of that 22GB/sec bandwidth for it, the fools...
How easy is it for an engine to support tiling? Xenos's 10MB is a walk on a tightrope. KK ruled out eDRAM for the framebuffer; a 4MB texture and HDR cache would be better. Can Xenos do HDR in the eDRAM?
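The arithmetic behind both posts is easy to check (usual assumptions, not official figures: 720p, 4xAA, 32-bit colour and 32-bit Z per sample, the 10MB daughter die, and Laa-Yosh's 210MB/s against a 22GB/s bus):

Code:
import math

# 720p with 4xAA: colour + Z per sample, 4 bytes each (common assumptions).
width, height, samples = 1280, 720, 4
bytes_per_sample = 4 + 4                      # 32-bit colour + 32-bit Z
framebuffer = width * height * samples * bytes_per_sample
edram = 10 * 1024 * 1024                      # Xenos daughter die

print(f"framebuffer : {framebuffer / 2**20:.1f} MiB")         # ~28.1 MiB
print(f"tiles needed: {math.ceil(framebuffer / edram)}")      # 3

# Laa-Yosh's sarcastic figure: 210 MB/s of extra traffic vs a 22 GB/s bus.
print(f"fraction    : {0.210 / 22 * 100:.2f} % of main bandwidth")  # ~0.95 %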
 
Alpha_Spartan said:
version said:
An F1 game is a launch game... my friend told me some info:
Sony's developer houses are getting 1080p 40" TVs.

Wow, 40" 1080p tvs for a secondary display...what a waste. That would suck playing F1 on your 27" SDTV and then looking over at your $4000 HDTV to see a 2D map of the course.

I mean, you aren't expecting to play MGS4 in 1080p are you?
What do you mean exactly? I don't get it as an answer to version's post, which mentions neither a secondary display nor 1080p rendering :p
 
tema said:
KK ruled out embedded VRAM because 1080p needs an expensive die. Xenos must break a frame down into 3 tiles for the small daughter die...

Yes, you're right. KK didn't explicitly rule out eDRAM completely. I'm sorry, I didn't really remember exactly what he had said. Unfortunately, considering the fact that RSX would need to integrate nVidia G7x technology (if not specific implementation), sufficient redundancy, 35 million+ DRAM transistors, in a chip that has a competitive level of "shading power" with Xenos and all fits in a viable die size (for a console), inclusion of eDRAM in RSX still seems uncertain at best. The question is, how much money is Sony willing to lose? One positive thing is that DRAM has a lower defect rate per transistor than logic, so it's still a possibility I suppose (hope) :smile:.

BTW, in the interview where KK talked about eDRAM, the assumption was that RSX needed an eDRAM frame buffer large enough to fit not one, but both, 1080p channel outputs :rolleyes:. What a heinous waste of resources, that would require at least 560 million eDRAM transistors.
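For scale, the rough sizing behind that kind of figure (my assumptions, since the interview didn't spell them out: two 1080p outputs, double-buffered colour plus a Z buffer each, and roughly one transistor per eDRAM bit):

Code:
# Rough sizing of an eDRAM framebuffer for *two* 1080p outputs.
# Assumptions are mine: double-buffered colour + one Z buffer per output,
# ~1 transistor per eDRAM bit (1T1C cell), no redundancy or tags counted.
pixels = 1920 * 1080 * 2                       # two channel outputs

def transistors(bytes_per_pixel):
    return pixels * bytes_per_pixel * 8        # bits ~= transistors for eDRAM

rgba8 = transistors(4 + 4 + 4)   # 32-bit front + back + Z
fp16  = transistors(8 + 8 + 4)   # FP16 HDR front + back + 32-bit Z

print(f"32-bit colour : {rgba8 / 1e6:.0f} M transistors")   # ~398 M
print(f"FP16 colour   : {fp16  / 1e6:.0f} M transistors")   # ~664 M
# Either way, hundreds of millions of transistors spent on nothing but
# framebuffer, which is the point being made above.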
 
Alpha_Spartan said:
Damn. A lot of people are going to be disappointed by the PS3 when it's announced.

That's really a fucking shame too since it will be one hell of a system. I really can't stand when people ruin a console for themselves by hyping it to unrealistic levels. A multi-core GPU??? *Deleted by a Mod* please!!! I've heard it all.

Once again I have to point to my sig. Please, I warn you, don't ruin an awesome system by killing it with unrealistic expectations. Even if the PS3 has nothing more than an overclocked G70 it will still rule. I'm confident in that. Like I mentioned earlier, this thing still has to be small, produced in the millions and cost less than $500. Let's not get crazy here.
Hey, it's not my fault. It's the Tekken/Motorstorm/Killzone videos' fault, ok? :p
 
one said:
What do you mean exactly? I don't get it as an answer to version's post, which mentions neither a secondary display nor 1080p rendering :p
You're right. But maybe I made the mistake of dismissing the assumption that Sony devs got 1080p TVs to watch movies on BD.
 
Alpha_Spartan said:
Don't blame it on Sony. Blame it on the 'bois who extrapolate specs based on videos.
Well, I was a bit sarcastic there :p

Well, the thing is I am not impressed by what I've seen on Xbox 360 so far, technically speaking. It doesn't really feel like next gen to me. And I just hope that's not all we are going to get from next gen. I just hope PS3 won't be more of the same. The videos at E3 showed concepts of gameplay that felt next generation. I just hope they are feasible.
 
Tile-based deferred rendering like PowerVR's doesn't avoid polygon transform work, but I think a little of that can be avoided regardless of rendering scheme by testing with bounding boxes initially.

Tile-based deferred rendering is more efficient in several areas of performance. External memory bandwidth and space are saved because the processing takes place in on-chip tile buffers. Drawing/calculating work is conserved because only necessary pixels are processed, which also leads to texture bandwidth savings on invisible surfaces. Another important efficiency of this rendering approach is the optimal memory access it achieves through data locality and contiguous writes of its pre-ordered information.
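A toy sketch of that binning/resolve flow (nothing PowerVR-specific, all names made up): geometry is sorted into screen tiles first, each tile is shaded entirely in an on-chip buffer, and external memory is written exactly once per tile.

Code:
# Toy tile-based deferred renderer: bin triangles into screen tiles, resolve
# each tile in a small on-chip buffer, write external memory exactly once.
TILE = 32

def bin_triangles(triangles):
    """Pass 1: sort triangles into the tiles their bounding box touches."""
    bins = {}
    for tri in triangles:
        xs = [x for x, y in tri]
        ys = [y for x, y in tri]
        for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
            for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

def render(triangles):
    """Pass 2: per tile, do hidden-surface removal and shading on-chip, then flush once."""
    bins = bin_triangles(triangles)
    framebuffer_writes = 0
    for (tx, ty), tris in bins.items():
        tile_buffer = [[None] * TILE for _ in range(TILE)]  # stays on-chip
        # ...visibility and shading happen here, per pixel, touching only
        # tile_buffer (no external bandwidth spent on hidden surfaces)...
        framebuffer_writes += 1        # one contiguous write-out per tile
    return framebuffer_writes

tris = [((10, 10), (50, 12), (30, 40)), ((100, 100), (140, 110), (120, 150))]
print("external write bursts:", render(tris))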
 
Lazy8s said:
Tile-based deferred rendering like PowerVR's doesn't avoid polygon transform work, but I think a little of that can be avoided regardless of rendering scheme by testing with bounding boxes initially.

Tile-based deferred rendering is more efficient in several areas of performance. External memory bandwidth and space are saved because the processing takes place in on-chip tile buffers. Drawing/calculating work is conserved because only necessary pixels are processed, which also leads to texture bandwidth savings on invisible surfaces. Another important efficiency of this rendering approach is the optimal memory access it achieves through data locality and contiguous writes of its pre-ordered information.

Does the transistor budget needed for TBDR increase as the complexity of the chip increases? If not, it would seem like a no-brainer to put it on every chip, especially since it doesn't seem like we'll be getting 512-bit memory buses for a while.
 