Look at this Google-cached (pulled down) PlayStation 3 page

Re: ...

DeadmeatGA said:
You can look at the bandwidth numbers of CELL as the best indicator of its real world performance; 25 GB/s for Yellowstone bandwidth and 30~40 GB/s peak throughput for Redwood interface between CPU and GPU is rather restrictive. What's so shocking about PSX3 is that SCEI seemed to have repeated all the major design errors of PSX2 that crippled it; the memory bandwidth is fairly low for a machine of this FLOPS rating, vertices still travel between CPU and GPU over a slow bus, the system memory is on the CPU side so textures still have to travel over the redwood bus. A PSX2 deja vue.


I MEAN, JUST LOOK AT THIS......


Fafalada, someone who actually develops on PS2, and Archie4oz, a former Square employee, both agreed that the bandwidth of PS2 is not really what cripples its performance the most.
In fact, they said that given the choice, the first thing they would improve would be the amount of memory, both system memory and the eDRAM in the GS, since improving the bandwidth of the EE-GS bus wouldn't do as much for performance as those two would.
They also complained about the lack of nice blending modes on the GS. Oh, and the amount of cache memory in the EE (those little 16k here and 16k there... those ones).
As u can see, bandwidth is not the main issue here :rolleyes:
Of course, u still won't be convinced even after they say it themselves, so i don't know why i bother...
Not really PS2 deja vu....

oh and it's ps2 not psx2 and deja vu, not deja vue....
 
London-boy, what is actually funnier is that even bandwidth-wise, what now seem to be the PlayStation 3 specs look much better compared to PlayStation 2...

In addition to that, they answered developers ( who complained that there were not enough caches and local memories to alleviate latency issues when the R5900 core had to access main RAM over and over ) by putting in more local memories ( each APU has 128 KB of Local Storage made with SRAM ) and more registers ( each APU has 128 x 128-bit GPRs ), and then we have more e-DRAM...
 
maybe we should use shorter sentences... like with Chap...

surely PS3 will come out with some flaws or something lacking here and there, but so did every other console before it and so will every console that will ever be released.

therefore fanbois will always have something to complain about.

i guess some people just don't know what "compromise" means.

i SERIOUSLY think that with PS2 Sony came up with a pretty neat piece of hardware, given the time it was released and the performance it has. Of course u have to draw the line somewhere; that is what Sony's internal teams decided to cut given the costs, and what they will decide to do with PS3.

to be honest i'm sure that whatever happens i'll be happy with PS3 since i'm not raising my expectations TOO high...
 
That was an excellent post and a wonderfully refreshing breeze of sanity in all of this! Good enough to go into a signature block, if someone was looking for a "saying"...
 
Wait, Deadmeat must be joking.

We just had an intelligent discussion (i.e. before he popped up again) about the bandwidth, and it was decided by the people here more knowledgeable than I that, due to the shift from bandwidth-intensive to computation-intensive tasks in today's and tomorrow's programs, even the 25.5 GB/s is adequate.

I disagreed with some of the things said in that conversation, but made it clear that the fundamental shift is a fact. Thus, for him to say that bandwidth is going to be the principal indicator of performance is wrong, so very 2000-ish. :)
 
I have no problem discussing compromises made by system designers in a certain architecture and how some of them reach the level of what we call a flaw in said design...

I guess there is a difference between "oh look at that, that might be a bottleneck and let's wonder how they could have circumvented it with the technology those engineers did have available and let's see how today that problem could have been solved" and "oh phew... 4 MB of VRAM... this is teh SUCK!!!! Kutaragi what an evil madman ( edit: which he is not ;) )"...
 
...

To London-Boy

Deadmeat, the fact that u actually believe that PS2 and PS3 are the job of one single man makes all your posts lose credibility.
Quote me where I professed such belief.

Fafalada, someone who actually develops on PS2, and Archie4oz, a former Square employee, both agreed that the bandwidth of PS2 is not really what cripples its performance the most.
The bandwidth problem would be more obvious if other, bigger design flaws didn't overshadow it.

In fact, they said that given the choice, the first thing they would improve would be the amount of memory, both system memory and the eDRAM in the GS, since improving the bandwidth of the EE-GS bus wouldn't do as much for performance as those two would.
So you weren't around while I was ripping the GS apart back in 1999.

Not really PS2 deja vu....
Surely it is.

surely PS3 will come out with some flaws or something lacking here and there, but so did every other console before it and so will every console that will ever be released.
Actually some consoles are engineered to improve upon the design flaws and design oversights of others.


To Panajev

You forget about the Local Storage in each APU, which adds up to a combined 2 MB of extra memory on the GPU and 4 MB on the CPU; plus, that diagram doesn't take into account e-DRAM on the CPU side.
The problem is that local RAM is only slightly more efficient than a comparable cache implementation, the only advantages of local RAM being that it is somewhat simpler to implement and that it doesn't consume any write-back bandwidth. The fact remains that all data and code still reside in main memory and must be brought into LS every frame.
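For context, the per-frame traffic into Local Storage is normally overlapped with computation via double buffering: while the processor works on one LS buffer, the DMA engine fills the other. A minimal Python sketch of the pattern (the `dma_start`/`dma_wait`/`compute` callables are hypothetical stand-ins for real DMA primitives, not anything from the patent):

```python
def process_stream(chunks, dma_start, dma_wait, compute):
    """Process a stream of main-memory chunks using two LS buffers."""
    buffers = [None, None]
    buffers[0] = dma_start(chunks[0])   # prefetch the first chunk
    results = []
    for i in range(len(chunks)):
        nxt = (i + 1) % 2
        if i + 1 < len(chunks):
            # kick off the next transfer before working on the current one
            buffers[nxt] = dma_start(chunks[i + 1])
        data = dma_wait(buffers[i % 2])  # wait only for the current chunk
        results.append(compute(data))    # compute overlaps the next DMA
    return results
```

Whether this fully hides the transfer cost depends, of course, on the ratio of compute time to DMA time per chunk, which is exactly what the bandwidth argument in this thread is about.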

GPU will do texture decompression ( no, not just 8-bit CLUT ), and with support for HOS/subdivision surfaces the memory bandwidth needed to load vertex data will be lower... ( at least for the external-RAM-to-CPU link )...
What will be the eDRAM size of the rasterizer??? 32 MB??? Is that enough for next-generation-quality rendering? Why must SCEI repeat the very mistake the developers loudly criticized in the first place???

London-boy, what is actually funnier is that even bandwidth-wise what seem now to be PlayStation 3 specs look much better compared to PlayStation 2...
PSX2 : 2.4 GB/s for 5 GFLOPS ( no, I don't count the FPU and that VU1 environmental FPU )
PSX3 : 25 GB/s for 1000 GFLOPS.

The bandwidth has improved 10-fold, but the FLOPS rating is supposed to increase 200-fold. Of course things are looking really bad.
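The scaling claim above can be checked directly. A quick sketch using the rumored figures quoted in this thread (these are forum rumors, not confirmed specs):

```python
# PS2 vs rumored PS3: main-memory bandwidth and peak FP throughput,
# as stated in the post above.
ps2_bw_gbs, ps2_gflops = 2.4, 5.0
ps3_bw_gbs, ps3_gflops = 25.0, 1000.0

bw_scale = ps3_bw_gbs / ps2_bw_gbs      # ~10.4x bandwidth increase
flops_scale = ps3_gflops / ps2_gflops   # 200x FLOPS increase

# Bytes of main-memory bandwidth available per FLOP:
ps2_bytes_per_flop = ps2_bw_gbs / ps2_gflops   # 0.48 B/FLOP
ps3_bytes_per_flop = ps3_bw_gbs / ps3_gflops   # 0.025 B/FLOP

print(f"bandwidth scales {bw_scale:.1f}x, FLOPS scale {flops_scale:.0f}x")
```

The bytes-per-FLOP figure drops by roughly 19x between the two machines, which is the whole substance of the disagreement that follows: whether that drop matters depends on how much data a frame actually touches.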

randycat99

...if you really need full screen.
The discussion was originally about the use of software processing to enhance the image quality of a low-resolution video source on a higher-resolution display.

Honestly, viewing 320x240 fullscreen on a 1280x960 display and expecting quality is pretty retarded.
This happens more frequently than you would expect; NTSC signals benefit greatly from interpolation on HDTV sets.

and that's what you were shooting for in your claim- "film level" quality.
The current HDTV standard tops out at 1920x1080i resolution. While very good, this maximum HDTV resolution is still substantially below that of film. When 5000x2000-resolution screens become widely available in 20 years, interpolation will be the only way to improve "obsolete" HDTV images.

This is the sort of thing you see when the "liberals" and the "conservatives" go at it in some political discussion.
So would you call yourself a conservative while labeling me a liberal??? A bad analogy, because liberals tend to fix up social and economic problems while conservatives screw things up. And I say the vast majority of people in technology sectors would be classified as liberals.

To marconelly!

So what do you say about the rumor that Sony plans to not only deliver standard devkits, but to deliver (to any interested 3rd party) a template engine for any given genre (FPS, platformer) Templates would be made in advance by their top developers, say Naughty Dog would write engine for 3rd person games, etc...
Good for PSX3 developers if true. But 3rd-party developers will not be very happy if Sony shipped obsolete code to them while keeping the latest engines for in-house and 1st-party games. Still, this does not resolve the fundamental issue of PSX3 development complexity. And games will look more and more alike, like all those Quake-engine-powered FPSes of the past.

To vince

it was decided by the people here more knowledgeable than I
Why depend on others to make judgments for you???

that, due to the shift from bandwidth-intensive to computation-intensive tasks in today's and tomorrow's programs, even the 25.5 GB/s is adequate.
Presuming an average data type size of 4 bytes, you are accessing 6.3 billion operands from main memory per second. (The real-world number is half that.) Divide 1 TFLOPS by this number and you get an operation:memory-access ratio of close to 160:1. 160 operations per memory access??? Something is very unbalanced here.
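The arithmetic in that paragraph checks out as stated; reproducing it (a sketch of the poster's own reasoning, using the thread's rumored 25.5 GB/s and 1 TFLOPS figures):

```python
bandwidth = 25.5e9   # bytes/s, rumored external memory bandwidth
operand_size = 4     # bytes, the assumed average data type size
flops = 1.0e12       # 1 TFLOPS peak

operands_per_sec = bandwidth / operand_size   # ~6.4 billion operands/s
ratio = flops / operands_per_sec              # ~157 ops per memory access

print(f"{operands_per_sec / 1e9:.2f} G operands/s -> {ratio:.0f}:1")
```

Note this implicitly assumes every operand comes from external memory, which is the assumption Fafalada attacks later in the thread.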
 
Deadmeat, let's not play games...

LS might not be as efficient as a cache, but it still does some good for the processor, and you also forgot to mention that the APUs now have 4x more registers than the VUs have.

These two things take pressure off external memory... they allow the CPU to work more on its own...

Also, the CPU should have e-DRAM too... they clearly mention in the patent that the memory bus is a combined 1,024-bit data-path ( using a cross-bar switch and several DMACs and Bank Controllers )...

Yellowstone comes with a combined 64-bit data-path.

Let's also mention that Yellowstone should prove more efficient than Direct RDRAM: things such as all the data busses being bi-directional should reduce latency quite a bit... and to that you add the fact that addresses are not multiplexed with data in Yellowstone.

16-32 MB of e-DRAM on the CPU, 32-64 MB of e-DRAM on the GPU and then external Yellowstone DRAM and the internal SRAM based Local Storages...

You cannot compare things and forget the e-DRAM...

What will be the eDRAM size of the rasterizer??? 32 MB??? Is that enough for next-generation-quality rendering? Why must SCEI repeat the very mistake the developers loudly criticized in the first place???

Procedural texturing, advanced texture compression ( I think we can safely decompress, from the APU's LS, schemes such as S3TC and VQ [they take only ~4 passes on the current GS, after all, and that is not exactly a flexible math power-house] ), Virtual Texturing... all added to 8x the e-DRAM: I think they should be all set :)
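VQ texture decompression, as mentioned above, really is just a table lookup: each compressed index selects a small block of texels from a codebook, which is why it maps so cheaply onto simple hardware. A toy sketch (a 1-D "texture" of 2-texel blocks and a made-up 3-entry codebook; real VQ schemes use 2x2 texel blocks and much larger codebooks):

```python
def vq_decompress(indices, codebook):
    """Expand a list of codebook indices into a flat list of texels."""
    texels = []
    for i in indices:
        texels.extend(codebook[i])   # copy the referenced texel block
    return texels

# Hypothetical 2-texel grayscale codebook for illustration:
codebook = [(0, 0), (255, 255), (0, 255)]
print(vq_decompress([0, 2, 1], codebook))   # [0, 0, 0, 255, 255, 255]
```

The compressed stream stores one small index per block instead of the block itself, so the bandwidth saving comes for free once the codebook is resident in local memory.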

Something is very unbalanced here.

Yes, you are forgetting about e-DRAM, and you are forgetting that sometimes we do more than 1 single FP op per datum brought in from memory...

Edit: as Fafalada said, we do not transfer and process the whole content of the external RAM every frame...
 
I think the template engine idea and High Level Shading Language and High Level Libraries will help developers... some will not push them much, some others, like Konami, will push them as far as they can and would probably have their own custom engines...

BTW, you should download the last speech Jason Rubin gave at the last GDC; he was talking about how things are going in such a way that spending much more money to achieve a technically much better engine is starting to pay off much less, and in the next generation the difference between optimized engines made by great programmers and engines based on things like Renderware ( for next-generation consoles ) and similar middlewares will be smaller and smaller...

It was big in the PSOne generation, it is smaller in the PlayStation 2's generation, and it will be even smaller in the PlayStation 3's generation...
 
The bandwidth problem would be more obvious if other, bigger design flaws didn't overshadow it.
So... if PS2 had more memory, bandwidth would be a problem? Well, too bad the whole thing was planned in advance given time and money constraints, so it seems the amount of memory they could put in doesn't overstrain the bandwidth limit... So why complain about the bandwidth, if it can't become a problem in actual use? And why complain about the memory, when it was the amount they could afford given the console's price tag?

But 3rd-party developers will not be very happy if Sony shipped obsolete code to them while keeping the latest engines for in-house and 1st-party games.
Well, I'd assume that it would be getting incremental updates, just like devkits are getting today.

And games will look more and more alike, like all those Quake-engine-powered FPSes of the past.
I very much doubt it, considering the high-level shader support, etc. It's the same as worrying that every picture rendered by the same rendering software (say, Maya) will look alike, when it's actually anything but, and is completely dependent on the artist's approach and vision.

Why depend on others to make judgments for you???
Because if I, say, had my own music band and had to make a video for one of our songs, and I had never shot a video in my life, wouldn't it be smarter to let people who have expert knowledge in that kind of stuff make it for me?

One question, though... if you said yourself that you don't care about hardware power that much anymore, why do you obsess over possible flaws in future (and existing) hardware? Do you think those flaws (bandwidth and whatnot) will make it less powerful than it could ideally be? But then again, you don't care... I don't get it. Some kind of devkit (and, as I said, possibly even template engines) will be given to developers, so they too won't care whether the machine could be better optimized than it is. It's not like any other console released at the same time will make a bigger splash hardware-wise, so that developers would complain about the 'underpowered' hardware.

At the time of its release, in its better-looking games, PS2 displayed very nice graphics (not to mention the games that came after!). Whatever 'flaws' the hardware had, other hardware of its time didn't allow for graphics that were any better. That makes me really not care about any possible flaws of PS3 hardware, as I know they will make it show a jump in visuals over whatever is available in (PC) games at the time of its release.
 
marconelly! said:
One question, though... if you said yourself that you don't care about hardware power that much anymore, why do you obsess over possible flaws in future (and existing) hardware? Do you think those flaws (bandwidth and whatnot) will make it less powerful than it could ideally be?

The death-blow... but The Beat Goes On...
 
Re: ...

DeadmeatGA said:
The discussion was originally about the use of software processing to enhance the image quality of a low-resolution video source on a higher-resolution display.

Put simply, you chose the wrong terminology for your claim then. 16x AA would be used to convert a very high resolution rendering to a lower resolution presentation for best quality and to soften any "artifacts". What you have suggested is something we all knew already: upscale something and you're going to need to blend it some way just to look presentable. It certainly doesn't look "better" than the smaller original (just larger), and it certainly doesn't magically assume "film level" quality. It's a stopgap measure. If you want "film level" quality, then you plainly need to start with a high resolution source and show it on an equally high resolution presentation device.
 
The problem is that local RAM is only slightly more efficient than a comparable cache implementation, the only advantages of local RAM being that it is somewhat simpler to implement and that it doesn't consume any write-back bandwidth. The fact remains that all data and code still reside in main memory and must be brought into LS every frame.


While you raise some valid concerns, would we be better off without the above RAM? Why is it there? Are the engineers at Sony really that retarded that they'd waste die space on redundant RAM?

thanks.
 
Deadmeat said:
And games will look more and more alike, like all those Quake-engine-powered FPSes of the past.
The majority of games on present consoles use middleware like UnrealWarfare, Renderware etc., and except for Xbox, none of them suffers from a "too alike" look (and Xbox only because of the 1001-PC-port syndrome).

Presuming an average data type size of 4 bytes, you are accessing 6.3 billion operands from main memory per second. (The real-world number is half that.) Divide 1 TFLOPS by this number and you get an operation:memory-access ratio of close to 160:1. 160 operations per memory access??? Something is very unbalanced here.
Thank you DM :D
Assuming a 512 MB machine (most agree this is a reasonable nextgen estimate) with INFINITE bandwidth, and assuming your own even more absurd notion that we process the entire freaking memory each frame at 60 fps, the theoretically most optimal ratio of your "operation : memory" (O:M) equation is 130:1. Such a horribly bandwidth-constrained machine, isn't it?
But wait, using a semi-realistic amount of memory instead (e.g. maybe 25% of it per frame), the ratio is closer to 500:1 - man, it really sucks to be so terribly bandwidth-constrained with infinite bandwidth.
8) 8) 8)
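Fafalada's counter-calculation can be reconstructed like this (a sketch under the post's own stated assumptions: 512 MB of RAM taken as 512 million bytes, 60 fps, 4-byte operands, 1 TFLOPS):

```python
ram_bytes = 512e6    # ~512 MB, the assumed next-gen memory size
operand_size = 4     # bytes per operand
fps = 60
flops = 1.0e12       # 1 TFLOPS

# Upper bound: touch every byte of memory exactly once per frame.
operands_per_sec = (ram_bytes / operand_size) * fps   # ~7.7e9 operands/s
full_ratio = flops / operands_per_sec                 # ~130:1, as quoted

# Semi-realistic case: only ~25% of memory touched per frame.
partial_ratio = flops / (operands_per_sec * 0.25)     # ~520:1, "closer to 500:1"
```

The point of the exercise: even a machine with unlimited bandwidth would show a high ops-per-access ratio, because a frame simply doesn't re-read all of memory, so the ratio by itself says little about being bandwidth-constrained.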

The bandwidth has improved 10-fold, but the FLOPS rating is supposed to increase 200-fold. Of course things are looking really bad.
And of course it looks bad. Even with infinite bandwidth we can't possibly feed those awesome FLOPS, and the morons at Sony are actually considering less than that...
Oh ye gods... I just remembered how terrible the situation is for GameCube too... it only increased bandwidth like 5x while the FLOPS rating jumped over 50x... let's all take a moment of silence to mourn for GC developers :'(

The bandwidth problem would be more obvious if other, bigger design flaws didn't overshadow it.
No no no...
Bandwidth is clearly the bigger problem. On PS2 the O:M ratio is 14:1, and it drops to an awesome 12:1 when we use infinite bandwidth.
That's like a huge 15% right there :oops: :oops: :oops: :oops:
 
This is amusing...

The bandwidth problem would be more obvious if other, bigger design flaws didn't overshadow it.

Deadmeat: "PlayStation 2 is terribly bandwidth constrained"

Beyond3D: "No, even if over-all PlayStation 2 is fairly well designed and not too unbalanced, there are other areas you would want to work on first: things like main memory size, and the R5900 caches, which are smaller than optimal and cause worse-than-ideal load-use latency penalties for the R5900 core ( which lowers integer performance )."

Deadmeat: "Well, bandwidth would be the problem if there weren't worse things over-shadowing it..."

Beyond3D: "So, if other parts of PlayStation 2 were pushed to the point of making bandwidth insufficient and thus a problem... then bandwidth would be a problem?!?"

Deadmeat: "Yes"

Beyond3D: :rolleyes:
 
Panajev2001a said:
This is amusing...

The bandwidth problem would be more obvious if other, bigger design flaws didn't overshadow it.

Deadmeat: "PlayStation 2 is terribly bandwidth constrained"

Beyond3D: "No, even if over-all PlayStation 2 is fairly well designed and not too unbalanced, there are other areas you would want to work on first: things like main memory size, and the R5900 caches, which are smaller than optimal and cause worse-than-ideal load-use latency penalties for the R5900 core ( which lowers integer performance )."

Deadmeat: "Well, bandwidth would be the problem if there weren't worse things over-shadowing it..."

Beyond3D: "So, if other parts of PlayStation 2 were pushed to the point of making bandwidth insufficient and thus a problem... then bandwidth would be a problem?!?"

Deadmeat: "Yes"

Beyond3D: :rolleyes:

BWAA HAA HAA HAA. :LOL: Yeah, I noticed that one too.

Deadmeat sure is a great backpedaller. Now, if only he could pedal BACK WHERE HE CAME FROM, it would be even better. ;)

*G*
 
...

To Randycat99

Put simply, you chose the wrong terminology for your claim then.
I told you shortly after that the 16x AA thing was a joke meant for hardware junkies. I chose the term "Frame Interpolation" thereafter.

It certainly doesn't look "better" than the smaller original (just larger),
It certainly looks better in the examples I posted. This is what they do at NASA and the NSA to enhance satellite pics.

and it certainly doesn't magically assume "film level" quality.
An HDTV frame is not film quality. You can make it approach film quality through resolution doubling and careful interpolation.

It's a stopgap measure. If you want "film level" quality, then you plainly need to start with a high resolution source and show it on an equally high resolution presentation device.
You have no choice, since we are likely stuck with HDTV transmission for another 50 years.
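At its simplest, the interpolation being argued about here is bilinear resampling: each output pixel is a weighted blend of the four nearest source pixels. A minimal grayscale sketch (real video scalers, and certainly the "careful interpolation" claimed above, use far more sophisticated filters than this):

```python
def bilinear_upscale(src, new_w, new_h):
    """src: list of rows of grayscale values; returns the upscaled image."""
    h, w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        # Map output row back into source coordinates.
        fy = y * (h - 1) / max(new_h - 1, 1)
        y0 = int(fy); y1 = min(y0 + 1, h - 1); wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (w - 1) / max(new_w - 1, 1)
            x0 = int(fx); x1 = min(x0 + 1, w - 1); wx = fx - x0
            # Blend horizontally on the two rows, then vertically.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

# A 2x2 gradient upscaled to 3x3 yields the interpolated midpoints:
print(bilinear_upscale([[0, 10], [10, 20]], 3, 3))
# -> [[0.0, 5.0, 10.0], [5.0, 10.0, 15.0], [10.0, 15.0, 20.0]]
```

This illustrates randycat's point as much as Deadmeat's: interpolation invents no detail, it only produces plausible in-between values, so the result looks smoother but carries no more information than the source.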

To Faf

The majority of games on present consoles use middleware like UnrealWarfare, Renderware etc.
And you have to resort to custom coding if you want your games to be graphically outstanding, which is expensive. Tell me on which platform it will cost more to build game-specific engines, PSX3 or Xbox2??? How much more???

the theoretically most optimal ratio of your "operation : memory" (O:M) equation is 130:1.
Show me how you arrived at your calculation.

Such a horribly bandwidth-constrained machine, isn't it?
Why restate the obvious.

But wait, using a semi-realistic amount of memory instead (e.g. maybe 25% of it per frame), the ratio is closer to 500:1
Once again, show me how you got that ratio.

man, it really sucks to be so terribly bandwidth-constrained with infinite bandwidth.
Looking forward to Faf Racer 2 on PSX3 to prove your words; BTW, what was the title of Faf Racer 1 again??? I have not followed the status of your racer for a while.

Bandwidth is clearly the bigger problem. On PS2 the O:M ratio is 14:1, and it drops to an awesome 12:1 when we use infinite bandwidth.
So how do you go from a 14:1 to a 500:1 ratio in one generation?? Like everything is in voxels or Bezier curves in the PSX3 generation and needs a shitload of calculations to render and light???

To Vince

why do you obsess over possible flaws in future (and existing) hardware?
I pity the poor PSX3 developers.
 
hey there Deadmeat

And you have to resort to custom coding if you want your games to be graphically outstanding, which is expensive. Tell me on which platform it will cost more to build game-specific engines, PSX3 or Xbox2??? How much more???

While Xbox2 is a moot issue (DX10+), are you familiar with optimisations for the current Xbox? While undoubtedly more cost-effective to optimise, I am wondering how much more so.

It's clear that the saving grace/death-knell for PS3 will be the high-level APIs and compilers (for your oft-mentioned auto-parallelism, or at least any paradigm which reduces its importance). If we can assume that it is feasible, how difficult will it be to make further optimisations?

Really, any debate on the above is useless since we have nothing to go on (unlike the patents for CELL, for instance). While I appreciate that you are effectively saying it is categorically impossible, I cannot shake the feeling that nobody would be 'that' sadistic towards developers.
 
...

While undoubtedly more cost-effective to optimise, I am wondering how much more so.
They should be about equal between PSX2 and Xbox (Xbox being slightly cheaper).

You will see major cost differences when comparing PSX3 to Xbox2, however.

If we can assume that it is feasible, how difficult will it be to make further optimisations?
Nobody really has the answer, since the field is so new. The best study examples are PVM/MPI cluster applications running on Linux boxes. I believe Sony will try to recruit lots of former cluster-application developers in a couple of years.

While I appreciate that you are effectively saying it is categorically impossible, I cannot shake the feeling that nobody would be 'that' sadistic towards developers.
Sony believes it could "teach" average developers to write efficient applications for PSX3, much the way average developers were taught to deal with polygon rendering and PSX2 in the 90's.
 