Yamauchi on the PS3: "...beginning of a new world..."

randycat99 said:
Let's throw down a value then, shall we? I submit that 1.5-3 GB/s should suffice to handle an AA stream. Yay or nay, anybody?

I would have to say nay.


Anyway, I'd have to ask: wouldn't there be latency from the frame to the screen?

Would the RSX have to do the setup, culling, texturing and HDR, then pass the frame to the Cell to have AA done, and then back to the RSX to be displayed?

The question is what we do with the second framebuffer. We'd have one frame being worked on by the RSX and one being worked on by the Cell, correct? So that means we'd have to transfer the finished RSX frame to the XDR so the Cell could work on it, and then have the RSX access that frame again to display it on screen, would we not?

I can see that eating up a lot of bandwidth and, depending on the delay, perhaps creating a disconnect between your input and what shows up on screen.

Now, I don't know if I'm correct.
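As a sanity check on randycat99's 1.5-3 GB/s figure, here is a back-of-envelope sketch of what a per-frame RSX -> XDR -> RSX round trip might cost. The resolution, frame rate and bytes-per-pixel values are my assumptions, not anything stated in the thread:

# Rough back-of-envelope: bandwidth to shuttle a finished frame
# RSX -> XDR (for Cell) and back again, once per frame.
# Assumed: 1280x720 at 60 fps, 4 bytes per pixel for the resolved
# colour buffer, 4x that if the full 4-sample buffer has to travel.
width, height, fps  = 1280, 720, 60
bytes_per_pixel     = 4                   # 32-bit colour
transfers_per_frame = 2                   # RSX -> XDR, then XDR -> RSX

resolved = width * height * bytes_per_pixel            # ~3.5 MB per frame
per_sec  = resolved * transfers_per_frame * fps        # bytes per second

print(f"resolved frame size:        {resolved / 2**20:.1f} MB")
print(f"resolved round trip @60fps: {per_sec / 1e9:.2f} GB/s")
print(f"with full 4x sample data:   {4 * per_sec / 1e9:.2f} GB/s")

Under those assumptions the raw transfer itself stays under 2 GB/s even with the full sample data; the bigger question is the read/write traffic the AA work itself generates, which is where the discussion below ends up going.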
 
jvd said:
Oh yeah, when did you play GT5?

Oh that's right, you didn't. Nice making baseless claims, the Sony way. God, the trolling has gotten so bad on this forum it's not even funny. Come on, Sonic, I know you can do a better job than this.

Projecting?

mckmas8808 said:
How can you say they may be too excited when you haven't seen what they have been able to do? We here are very defensive because we don't want to get our hopes up only to be let down by disappointing games. They know what they are creating; we don't.

Let me show you what GT5 looks like now. Keep in mind this game is at least one year away and hasn't yet come close to using the RSX or the RSX -> Cell connectivity.

[Image: vgt33op.jpg]


I know, bad quality. This is NOT direct feed.

I agree... it will look better than that (the video), much much better, and anyone who thinks otherwise should get a check-up, because once more cars (with more cars on track than GT4) and the individual tracks get up to the level of the two cars they had on display, we should be seeing some absolutely phenomenal graphics... frankly, the above shot could easily be confused for a photo... it all comes down to the uncanny valley... which I'm sure will pull us out once in a while. :smile:
 
MBDF said:
I agree... it will look better than that (the video), much much better, and anyone who thinks otherwise should get a check-up, because once more cars (with more cars on track than GT4) and the individual tracks get up to the level of the two cars they had on display, we should be seeing some absolutely phenomenal graphics... frankly, the above shot could easily be confused for a photo... it all comes down to the uncanny valley... which I'm sure will pull us out once in a while. :smile:

That's what I've been trying to say the whole time. I know you guys realize that GT5 will look better than this. They do have over a year of development time.
 
jvd said:
I would have to say nay.


Anyway, I'd have to ask: wouldn't there be latency from the frame to the screen?

Yes, there would be latency, but since it would be a single transfer per frame, it would be unnoticeable.

Would the RSX have to do the setup, culling, texturing and HDR, then pass the frame to the Cell to have AA done, and then back to the RSX to be displayed?

Yes, but the 360 works the same way.

Xenos renders the frame sans AA, and then sends it to the eDRAM, where the AA is applied. The final frame (or tile, actually) is then rendered back to the GDDR3, where Xenos picks it back up to be displayed.

The question is what we do with the second framebuffer. We'd have one frame being worked on by the RSX and one being worked on by the Cell, correct? So that means we'd have to transfer the finished RSX frame to the XDR so the Cell could work on it, and then have the RSX access that frame again to display it on screen, would we not?

That's correct, but again, the 360 works the same way. The only difference between the two is the fact that the PS3 would have to use XDR instead of embedded RAM, and Cell would perform the AA instead of logic within the embedded RAM.

But other than that, the process would be basically the same on both systems.



Don't forget that the original plan was to have Cell do the rendering as well as the CPU workload. It's quite capable of performing just AA with little to no noticeable hit to performance.
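For what it's worth, here is a minimal sketch (my own illustration, not anything Sony has described) of the one kind of AA that really can be done as a post-step on Cell: a plain box-filter resolve that averages the stored sub-samples of each pixel once the RSX has rendered them. As is pointed out further down the thread, anything beyond this simple downsampling has to happen at rasterisation time.

# Minimal sketch of a box-filter resolve: average each pixel's sub-samples.
# 'samples' is a flat list of (r, g, b) tuples, num_samples entries per pixel.
def resolve(samples, width, height, num_samples):
    out = []
    for p in range(width * height):
        sub = samples[p * num_samples : (p + 1) * num_samples]
        out.append(tuple(sum(c[i] for c in sub) // num_samples for i in range(3)))
    return out

# One pixel whose four samples straddle a black/white edge resolves to mid-grey:
print(resolve([(255, 255, 255)] * 2 + [(0, 0, 0)] * 2, 1, 1, 4))   # [(127, 127, 127)]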
 
mckmas8808 said:
That's what I've been trying to say the whole time. I know you guys realize that GT5 will look better than this. They do have over a year of development time.

Hey, I admitted it. Like I said earlier, there is no reason not to expect GT5 to look equal to PGR3 as a bare minimum. I would honestly be surprised if it didn't look better than PGR3, considering it doesn't have the time limit in development that PGR3 does.
 
Powderkeg said:
Yes, but the 360 works the same way.

Xenos renders the frame sans AA, and then sends it to the eDRAM, where the AA is applied. The final frame (or tile, actually) is then rendered back to the GDDR3, where Xenos picks it back up to be displayed.

Why don't you try talking about something you know something about?

Antialiasing is NOT a postprocess and cannot be performed as one, unless you count simple downsampling.
 
Yes, there would be latency, but since it would be a single transfer per frame, it would be unnoticeable.

errr
http://www.beyond3d.com/articles/xenos/index.php?p=04

The Xenos, while it has two dies on the package, is very different from two separate chips. It's better to think of it as one chip split in two.

ATI's calculations lead to a colour and z bandwidth demand of around 26-134GB/s at 8 pixels with 4x Multi-Sampling AA enabled at High Definition TV resolutions. The lower end of that bandwidth figure is derived from having 4:1 colour and Z compression, however the lossless compression techniques are only optimal when there are no triangle edges intersecting a pixel, but with the presumed high geometry detail within a next generation console titles the opportunities for achieving this compression ratio across the entire frame will be reduced. So, with 256GB/s of bandwidth available in the eDRAM frame buffer there should always be sufficient bandwidth for achieving 8 pixels per clock with 4x Multi-Sampling FSAA enabled and as such this also means that Xenos does not need any lossless compression routines for Z or colour when writing to the eDRAM frame buffer.

The question is where the PS3 is going to find 26-134 GB/s of bandwidth. It's either going to eat up the texture RAM bandwidth or the XDR RAM bandwidth, just for 4x FSAA.
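To see roughly where numbers that big come from, here is a rough reconstruction under my own assumptions; it is not ATI's exact derivation, but it lands in the same ballpark as the quoted 26-134 GB/s range.

# Rough sketch: 4x MSAA frame-buffer write traffic for a GPU doing
# 8 pixels per clock at 500 MHz, with 4 bytes of colour + 4 bytes of Z
# per sample, and optionally ideal 4:1 lossless compression.
pixels_per_clock = 8
clock_hz         = 500e6
samples_per_pix  = 4
bytes_per_sample = 4 + 4          # colour + Z

raw = pixels_per_clock * clock_hz * samples_per_pix * bytes_per_sample
print(f"uncompressed write traffic: {raw / 1e9:.0f} GB/s")       # ~128 GB/s
print(f"with ideal 4:1 compression: {raw / 4 / 1e9:.0f} GB/s")   # ~32 GB/s

Even the compressed end of that is a big slice of the roughly 22.4 GB/s of GDDR3 and 25.6 GB/s of XDR usually quoted for the PS3, which is the point being made here.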

That's correct, but again, the 360 works the same way. The only difference between the two is the fact that the PS3 would have to use XDR instead of embedded RAM, and Cell would perform the AA instead of logic within the embedded RAM

Right, but the eDRAM is 10 MB, so the tiles come in 10 MB chunks. What is the local store per SPU, 256 KB? Or is it less? That is a lot of tiles to stream in. It will be a hell of a lot more than 3 tiles, and swapping them back and forth is going to eat up bandwidth. Then of course you have to send the finished product back to the RSX to be displayed.
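A rough count, under my own assumptions about the buffer format, of how many SPU-local-store-sized tiles a 720p, 4x multisampled framebuffer breaks into, versus Xenos's three roughly 10 MB eDRAM tiles:

# Assumed: 1280x720, 4 samples per pixel, 4 bytes colour + 4 bytes Z per
# sample; each SPU has 256 KB of local store, of which I assume only half
# is free for pixel data once code and double-buffering are accounted for.
width, height     = 1280, 720
samples_per_pixel = 4
bytes_per_sample  = 4 + 4
local_store       = 256 * 1024
usable            = local_store // 2

framebuffer = width * height * samples_per_pixel * bytes_per_sample
print(f"4x MSAA buffer:      {framebuffer / 2**20:.1f} MB")   # ~28 MB
print(f"full 256 KB tiles:   {framebuffer // local_store}")   # ~112 tiles
print(f"usable 128 KB tiles: {framebuffer // usable}")        # ~225 tiles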

I'm sure the PS3 can do FSAA and HDR; I just don't think it will be worth the trade-off in rendering performance. That is, however, assuming the RSX cannot do FSAA + HDR itself.
 
Powderkeg said:
Hey, I admitted it. Like I said earlier, there is no reason not to expect GT5 to look equal to PGR3 as a bare minimum. I would honestly be surprised if it didn't look better than PGR3, considering it doesn't have the time limit in development that PGR3 does.

That's if the RSX is better than the G72, or even if it's better than a 7800 Ultra.


The one thing that has struck me is why so many of the PS3's top games are coming a year after its launch. Is it THAT difficult to develop for, such that top games like MGS, GT5 and DMC4 can't be launch games, given that development would have begun more than a year and a half before the supposed PS3 US launch?
 
pakpassion said:
That's if the RSX is better than the G72, or even if it's better than a 7800 Ultra.


The one thing that has struck me is why so many of the PS3's top games are coming a year after its launch. Is it THAT difficult to develop for, such that top games like MGS, GT5 and DMC4 can't be launch games, given that development would have begun more than a year and a half before the supposed PS3 US launch?

These games that you mention have historically been released "when it's done," independent of launch. Except maybe for GT. I remember Sony forcing PD to release GT2 early for a holiday release, and look what happened to it: basically you were playing a beta version with horrible collision detection. I don't think Sony will make that mistake again; they'll tell their premier developers to "take their time."

You don't tell Kojima to hurry up with MGS4, just as you don't tell Michelangelo to hurry up with the ceiling of the Sistine Chapel.
 
pakpassion said:
That's if the RSX is better than the G72, or even if it's better than a 7800 Ultra.


The one thing that has struck me is why so many of the PS3's top games are coming a year after its launch. Is it THAT difficult to develop for, such that top games like MGS, GT5 and DMC4 can't be launch games, given that development would have begun more than a year and a half before the supposed PS3 US launch?

One thing I find interesting is that the G70 at 550 MHz is equivalent to a 32-pixel-pipe 7800 GTX at stock speeds... now, when thought of that way, it's not hard to imagine the RSX, even mostly resembling the G70, being quite the beast.
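A quick check of that equivalence on raw pixel throughput alone, assuming the RSX keeps the G70's 24 pixel pipes; bandwidth and everything else is ignored, which is exactly the objection raised below.

# Pixels-per-second throughput only: 24 pipes at 550 MHz versus a
# hypothetical 32-pipe part at the 7800 GTX's stock 430 MHz.
g70_pipes, rsx_clock = 24, 550e6
gtx_pipes, gtx_clock = 32, 430e6

print(f"24 pipes @ 550 MHz: {g70_pipes * rsx_clock / 1e9:.1f} Gpixels/s")  # 13.2
print(f"32 pipes @ 430 MHz: {gtx_pipes * gtx_clock / 1e9:.1f} Gpixels/s")  # 13.8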
 
MBDF said:
One thing I find interesting is that the G70 at 550 MHz is equivalent to a 32-pixel-pipe 7800 GTX at stock speeds... now, when thought of that way, it's not hard to imagine the RSX, even mostly resembling the G70, being quite the beast.

Except you're not factoring in bandwidth.


Anyway, we don't know what the RSX will be. It could be a 550 MHz G70 with a quad disabled, or it could be a 550 MHz G70 with an extra quad, making it even more powerful.

However, we do know about the bandwidth.
 
MBDF said:
One thing I find interesting is that the G70 at 550 MHz is equivalent to a 32-pixel-pipe 7800 GTX at stock speeds... now, when thought of that way, it's not hard to imagine the RSX, even mostly resembling the G70, being quite the beast.

Well then, considering the Xbox 360 has more pipes than the X1800 XT, with better performance per unified pipe at 500 MHz, it looks like even more of a beast. Also consider that for the Cell to access more than 256 MB of RAM it needs to go through the RSX, wasting bandwidth; for the RSX to access more than 256 MB of RAM it needs to go through the Cell, further wasting bandwidth. Then, considering that the RSX uses the same architecture and technology as the G70, which seemingly is rather poor at, or cannot do, HDR and AA at the same time, the Cell (as someone here said) would need to do AA after the RSX does HDR, which would waste even more bandwidth. In the end the performance of both machines will be the same, but I believe that, because of Xenos, the 360 will be able to compete graphically with the R580.
 
MBDF said:
One thing I find interesting is that the G70 at 550 MHz is equivalent to a 32-pixel-pipe 7800 GTX at stock speeds... now, when thought of that way, it's not hard to imagine the RSX, even mostly resembling the G70, being quite the beast.

Let's look at this logic:

http://www.gamepc.com/labs/view_content.asp?id=xfx7800gtxoc&page=5

This site has the 7800 GTX overclocked from 430 MHz to 500 MHz, just 50 MHz below the RSX, with a lower memory clock:


All of these are at 1600x1200:

Game             Stock fps   OC fps (500 MHz core)
Half-Life 2          129.8                  138.1
Battlefield 2         74.6                   81.5
F.E.A.R. demo         23.1                   25.0
Far Cry (HDR)         42.1                   46.7

Add around 3-4 fps for 50 MHz more.
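For context, here is how those gains compare to the size of the overclock: the core went up about 16%, the frame rates only 6-11%, and (as the next reply notes) the memory was overclocked at the same time, so the scaling cannot be attributed to the core clock alone.

# Core clock gain vs. frame-rate gain from the numbers quoted above.
results = {                      # game: (stock fps, overclocked fps)
    "Half-Life 2":   (129.8, 138.1),
    "Battlefield 2": ( 74.6,  81.5),
    "F.E.A.R. demo": ( 23.1,  25.0),
    "Far Cry (HDR)": ( 42.1,  46.7),
}

print(f"core clock gain: {(500 - 430) / 430:.1%}")            # ~16.3%
for game, (stock, oc) in results.items():
    print(f"{game:14s} fps gain: {(oc - stock) / stock:.1%}")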
 
pakpassion said:
Also consider that for the Cell to access more than 256 MB of RAM it needs to go through the RSX, wasting bandwidth; for the RSX to access more than 256 MB of RAM it needs to go through the Cell, further wasting bandwidth.

I'm not sure why Cell would need access to more than 256 MB of RAM.

pakpassion said:
Then, considering that the RSX uses the same architecture and technology as the G70, which seemingly is rather poor at, or cannot do, HDR and AA at the same time, the Cell (as someone here said) would need to do AA after the RSX does HDR, which would waste even more bandwidth. In the end the performance of both machines will be the same, but I believe that, because of Xenos, the 360 will be able to compete graphically with the R580.

I wouldn't classify it as very poor... as you said yourself, you expect it to compete quite evenly with Xenos... as for bandwidth, the RSX has yet to be finalized, so who knows... we could see a jump to 256-bit memory, and/or some other tweaks... either way, I expect the IQ of both machines to be similar.
 
In that test both the memory and the GPU core itself were overclocked.

Finding a test where only the GPU is increased in speed, or only the RAM (or both, independently of each other), would give you a better picture of how much difference each one makes.

I would expect that at 1600x1200 with 4x FSAA and 8x aniso on, as your benchmarks have it, the card would be very bandwidth limited, and of course even a small 25 MHz increase in memory speed (50 MHz effective) would increase performance.

edit

The RAM is running at 675 MHz on a 256-bit bus, giving this video card almost double the RAM bandwidth the RSX will have to its GDDR3. I believe you'd in fact have to devote the bandwidth of both the XDR and the GDDR3 to equal the amount of bandwidth this GPU has available to it, and that would leave no bandwidth for the CPU. So that is unlikely.
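A back-of-envelope check of that comparison, assuming the commonly cited figures of 128-bit, 700 MHz GDDR3 for the RSX and about 25.6 GB/s for the XDR pool:

# Peak bandwidth in GB/s for double-data-rate memory.
def ddr_bandwidth(clock_mhz, bus_bits):
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) / 1e9

oc_gtx   = ddr_bandwidth(675, 256)        # the overclocked card above
rsx_gddr = ddr_bandwidth(700, 128)        # assumed RSX local memory
xdr      = 25.6                           # commonly quoted Cell/XDR figure

print(f"675 MHz, 256-bit: {oc_gtx:.1f} GB/s")          # ~43.2
print(f"700 MHz, 128-bit: {rsx_gddr:.1f} GB/s")        # ~22.4
print(f"GDDR3 + XDR:      {rsx_gddr + xdr:.1f} GB/s")  # ~48.0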
 
I wouldn't classify it as very poor... as you said yourself, you expect it to compete quite evenly with Xenos... as for bandwidth, the RSX has yet to be finalized, so who knows... we could see a jump to 256-bit memory, and/or some other tweaks... either way, I expect the IQ of both machines to be similar.

A 256-bit bus or faster RAM will only increase the cost of the unit, and I don't see them doing it. Of course, with 700 MHz GDDR3 on a 256-bit bus you're then talking about double the available bandwidth. But as we see from the PC add-in cards, the 256-bit bus scales down in price very slowly, and we only see it in the mid-to-high end of the spectrum (with cards being phased out, like the 9800 Pros, hitting the $200 mark), normally around $300 and up.
 
MBDF said:
I'm not sure why Cell would need access to more than 256 MB of RAM.



I wouldn't classify it as very poor... as you said yourself, you expect it to compete quite evenly with Xenos... as for bandwidth, the RSX has yet to be finalized, so who knows... we could see a jump to 256-bit memory, and/or some other tweaks... either way, I expect the IQ of both machines to be similar.


It has already been finalised.

[Image: kaigai02l.gif]
 
Well, clock speeds can change slightly. I could see another 200-400 MHz on the Cell and maybe another 50 MHz on the GPU. But I don't see the RAM moving much, as even in spring 2006, 700 MHz GDDR3 would be very expensive.
 
jvd said:
A 256-bit bus or faster RAM will only increase the cost of the unit, and I don't see them doing it. Of course, with 700 MHz GDDR3 on a 256-bit bus you're then talking about double the available bandwidth. But as we see from the PC add-in cards, the 256-bit bus scales down in price very slowly, and we only see it in the mid-to-high end of the spectrum (with cards being phased out, like the 9800 Pros, hitting the $200 mark), normally around $300 and up.

So I'll ask you: do you think the RSX will be bandwidth limited relative to its needs?

Personally, I would like to see a 256-bit bus, however expensive it might be; just as in RPGs I like to level up much more than is required before a boss fight... that's just my style... I'm sure, though, it will defy my expectations regardless.
 
jvd said:
Well, clock speeds can change slightly. I could see another 200-400 MHz on the Cell and maybe another 50 MHz on the GPU. But I don't see the RAM moving much, as even in spring 2006, 700 MHz GDDR3 would be very expensive.

Consoles only get downgraded, looking at their history.
 