Top developers slam PS3 "broken" allegations

rabidrabbit said:
Hmmm... I'm sure that slide already circulated the web during or after the "Devstation" event, or was it the "GDC"?
I know I've seen it before, and I'm pretty positive it was already discussed here then, too.

How sure are you?

My memory is telling me the bandwidth slide is "new", at least to B3D. I don't remember it being discussed, and a search on CELL reading from the GDDR3 pool at 16MB/s turns up no hits. The general responses in the thread also indicate this was not already known. Ditto on any other forum I have been on.

And the vertex information is "new". There are a lot of sites still quoting the "1.1B vert/s" number (Wikipedia even). That has been the only real solid number. As Mintmaster linked and showed, the same belief was spread around here that RSX/G70 could do 2 vertices per clock. And thus far, outside the 1.1B vert/s number, there has been no other information from Sony/NV on the vertex shaders. Even as recently as 2 months ago there was the typical speculation that they had been modified in some way, due to the lack of information on Sony slides.

You may be able to find the slide here, but so far I have not seen any posts on it, and the general responses and discussion here and elsewhere indicated it was pretty fresh to most. By contrast, you see the 500M setup number all over the net for Xenos and no one bats an eye.

With all the speculation about CELL<>RSX interaction and the RSX vertex shaders, it would be hard to believe such information would have just passed by without any meaningful discussion. It could have, but I cannot find any traces of such. Well, there were a couple of comments by me about the 275M setup rate on G71 @ 550MHz, but I think most people ignored me ;)
 
I'm not 100% sure, but when I saw that slide now it looked familiar and I thought "what's the fuss, didn't we already go through this some time ago?"
I'm pretty positive I have seen it, and since it is from the "Devstation" event, it is not new; it's very likely the slides from that event have already circulated the web.
It could just be déjà vu of course, or premonition from "DS07"...

Edit: Oh, no! I'm over 4,000 already! And I was planning to hold it back at 3,999 so it would feel better when I finally let it go!
 
Where is the gross misinterpretation??

This PS3 dev is glossing over the issue because he is a PS3 dev. The data is basically accurate and is being accurately represented. Just because some dev spends all the extra man-hours to somehow circumvent these obvious issues does not change the actual assessment.

It is simply a case of a developer defending his assets.

Sony simply should not be allowed to pull another PS2-level deception that unfairly killed the Dreamcast. If they want to throw around untrue and unrealistic numbers to make everyone think the Xbox is inferior, then they need to be just as ready to be exposed as the misrepresenters they tend to be.

That is how I feel about it anyway.
 
@Mintmaster: Despite all your painting black... I would just have a look at what's been shown till now, and it does not seem at all that there is any indication that the 360 could push more polys; the opposite is the truth if you have a look and compare current games (often quite low poly/framerate on 360, while PS3 pushes 60 fps easily, with loads of detail, 6 months ahead of launch). ;)
 
Nemo80 said:
@Mintmaster: Despite all your painting black... I would just have a look at what's been shown till now, and it does not seem at all that there is any indication that the 360 could push more polys; the opposite is the truth if you have a look and compare current games (often quite low poly/framerate on 360, while PS3 pushes 60 fps easily, with loads of detail). ;)

Please show me the list of PS3 games that are "pushing easily 60 fps with loads of detail" because I haven't seen them.

So far games on both systems have been graphically on par with each other. A couple of EA games are said to run better on 360, but that could be due to any number of things. Definitely too soon to draw any kind of reasonable conclusion.
 
BaronZed said:
Where is the gross misinterpretation?
Misunderstanding of "local store". The slides meant GDDR memory; the Inquirer thought SPE local stores. First, the limitation is actually on RSX and not Cell; secondly, it's no serious limitation; thirdly, if it were about SPE local stores, then Sony wouldn't dare to release Cell in this state.

Try again.
 
BaronZed said:
Where is the gross misinterpretation??

The title of the original Inquirer article says "PS3 hardware slow and broken". Somewhere in the article is also the comment "What a piece of junk" :D
From all indications, it is anything but slow. According to the developers, the identified "flaws" are also non-issues so far.

Personally, I'll file all these under "RSX cannot do HDR and 4xAA at the same time". The developers know best and may be able to do something about specific limitations (just like Xbox 360 developers have to work around Xbox 360's limitations). I'm happy with what I see in the games so far.

Just bring in the games and applications and make sure the hardware is reliable. I'm somewhat concerned with PS3's complexity. The rest doesn't really bother me. We already know enough about PS3 to infer that it's indeed very fast.
 
BaronZed said:
Sony simply should not be allowed to pull another PS2 level deception that unfairly killed the Dreamcast.
Maybe if journalists (like the illustrious fellows from the Inq.) spent more time researching their articles instead of writing meaningless parabole based on their lack of comprehension of the target matter, there'd be less "deception" going around during various console launches.
 
Hardknock said:
So far games on both systems have been graphically on par with each other.
Show me a 360 game within a cooee of the graphical quality of the Naughty Dog game.
I can't wait for the shit to hit the fan when we have multiplatform games next year and one's running at 60fps while the other's at 30fps :)
 
Mintmaster said:
It also appears that RSX can only read vertex data at 4GB/s from memory, since the attribute fetch goes through the command processor (see Dave's comment). Then you're constrained even more. If Cell generated a 6-attribute vertex, you're looking at 40M verts per second. Not much of an "assistance", really.

Or to put it another way, if you generate enormous vertices, you might be limited by bandwidth instead of setup and raster costs.

Colour me stunned by that revelation.

What the hell are you putting in 6 full float32 vec4's on every vertex?? a quick mental calculation says I can stuff position, normal, tangent, a UV-coord and a colour into a third of that... you want what, a whole extra 4x4 float32 matrix too?

In a more sane case, you could generate pretty useful vertices and still be capable of 120Mv/s - which is nearly half of the suggested maximum setup rate, and would give a couple of million polys a scene - *even if these bloated vertices were required for every pass*. In the real world, a lot of your rendered geometry possibly doesn't even need shading (shadows or other effect passes), in which case you definitely don't need much bloat in there, and you're left with a lot more headroom for the stuff that does need it.
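
To make the arithmetic in the last two paragraphs explicit, here's a minimal sketch in C. The 4GB/s attribute-fetch figure is the one from the slide under discussion; the two vertex layouts (six float32 vec4s versus a packed ~32-byte vertex) are illustrative stand-ins, not formats from any actual title:

```c
#include <stdio.h>

/* Bandwidth-limited vertex rate: verts/s = fetch bandwidth / bytes per vertex.
   4 GB/s is the attribute-fetch figure from the slide; layouts are examples. */
int main(void)
{
    const double fetch_bw = 4.0e9;                     /* bytes/s via the command processor */
    const double fat_vertex  = 6 * 4 * sizeof(float);  /* 6 float32 vec4s = 96 bytes */
    const double lean_vertex = 2 * 4 * sizeof(float);  /* position/normal/tangent/UV/colour packed ~ 32 bytes */

    printf("fat  vertex: ~%.0f Mverts/s\n", fetch_bw / fat_vertex  / 1e6); /* ~42  */
    printf("lean vertex: ~%.0f Mverts/s\n", fetch_bw / lean_vertex / 1e6); /* ~125 */
    return 0;
}
```

Those two results line up with the ~40M and ~120Mv/s figures quoted above.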

IME this is a non-issue.
 
Acert93 said:
How sure are you?

My memory is telling me the bandwidth slide is "new", at least to B3D.
yep

Never mentioned here before; in fact, there were many suggestions to the contrary, that Cell would be dancing freely between memory pools.

...
With all the speculation about CELL<>RSX interaction and the RSX vertex shaders, it would be hard to believe such information would have just passed by without any meaningful discussion. It could have, but I cannot find any traces of such. Well, there were a couple of comments by me about the 275M setup rate on G71 @ 550MHz, but I think most people ignored me ;)
right again


Not that it means a whole lot in the scheme of things, evidently (according to posters more knowledgeable than me on the technical ramifications). But lots of people were saying the opposite of this info for many months, and when it comes to light we, here, go "yeah, so?". ;)
 
Nemo80 said:
@Mintmaster: Despite all your painting black...
What have I painted black? Almost everyone agrees that a 7900GTX and X1900XT are more powerful than RSX. They don't have "Cell cooperation", and they can put out insane graphics. They also only put out 300-400M tris/sec under very limited circumstances, despite 600MHz clock speeds.

I'm just saying that this cell-rsx "cooperation", and rsx being able to "control" the SPE's, is not some magic bullet. You're better off just using the vertex shading ability of RSX, and nAo has suggested so as well. It's not worth doing vertex shading with Cell.

I'm also saying that anyone expecting 1.1 billion polys per second is dreaming. Achieving this in game may not even happen for PS4.
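
To put rough numbers on that, here's a back-of-envelope sketch in C using only figures already quoted in this thread: the one-triangle-per-two-clocks setup rate inferred from the 275M @ 550MHz figure mentioned earlier, and the 2-verts-per-clock belief behind the 1.1B number:

```c
#include <stdio.h>

/* Back-of-envelope setup math, using only numbers quoted in this thread:
   ~275M setup @ 550MHz implies one triangle per two clocks. */
int main(void)
{
    const double clock_hz = 550e6;

    printf("setup-limited rate: %.0fM tris/s\n", clock_hz / 2.0 / 1e6); /* 275M */
    printf("1.1B verts/s would need %.1f verts per clock\n",
           1.1e9 / clock_hz);  /* 2.0 - four times the 0.5 tris/clock setup rate */
    return 0;
}
```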

Well, I would just have a look at what's been shown till now, and it does not seem at all that there is any indication that the 360 could push more polys; the opposite is the truth
LOL, the best expert on these boards could not tell how many polys a game is pushing, let alone someone with your misinformed opinion. Half these games were developed without Xenos until late in the development cycle, too. The artwork began ages ago.
 
RancidLunchmeat said:
Sure... The misinterpretation is definitely worth debunking, but if you go back and re-read this thread, people were saying that the Inq data was horribly incorrect and that therefore no Inq reports should ever be posted here again.

When the reality of the situation is that the Inq actually provided us all with firm data that nobody before had access to.

Not nobody... and I should say that I had no problem with the data and did not dispute it, at least. I'm just saying the data was not the problem. Is it worth suffering The Inquirer's filtering to get that data? I don't know. I'd have preferred it if a competent site had revealed that info. Leakers take note, I guess ;)
 
MrWibble said:
Or to put it another way, if you generate enormous vertices, you might be limited by bandwidth instead of setup and raster costs.

Colour me stunned by that revelation.

What the hell are you putting in 6 full float32 vec4's on every vertex?? a quick mental calculation says I can stuff position, normal, tangent, a UV-coord and a colour into a third of that... you want what, a whole extra 4x4 float32 matrix too?
People were expecting Cell to take over vertex shading duties. There are games out there that use 10 iterators as input to some pixel shaders. Position, eye vector, 1-4 light vectors, 1-4 halfway vectors, normal, texcoords for base texture, texcoords for 1-4 shadow maps... This is what I was talking about. 6 iterators is not excessive at all. You're talking about raw input to generate the vertex, and that means Cell isn't doing vertex shading. You're quoting me out of context.

And how are you getting all those things into 2 vec4's? Are you using FP16 and RGBA8? Even if those take up less room and BW, you're still taking a clock cycle to load each attribute. You can't unpack 8+ values in one clock cycle even if the total size is 128-bit (though you're right that the 4GB/s limit is alleviated).
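
For concreteness, here's a hypothetical sketch in C of the two layouts being argued over: six full float32 vec4 attributes versus the same data squeezed down with FP16 and RGBA8. The field choices are mine, for illustration only; note that even the packed layout still costs one fetch per attribute, which is the point being made above:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical vertex layouts for illustration; not from any actual engine. */

/* Full float32 layout: 6 vec4 attributes, 96 bytes per vertex. */
struct vertex_full {
    float position[4];
    float normal[4];
    float tangent[4];
    float uv_base[4];     /* base texture coords, padded to a vec4 */
    float uv_shadow[4];   /* shadow map coords,  padded to a vec4 */
    float colour[4];
};

/* Packed layout using FP16 and RGBA8: far smaller and lighter on
   bandwidth, but the GPU still spends a fetch per attribute. */
typedef uint16_t fp16;    /* half-float stored as a raw 16-bit pattern */
struct vertex_packed {
    fp16    position[4];  /*  8 bytes */
    fp16    uv_base[2];   /*  4 bytes */
    fp16    uv_shadow[2]; /*  4 bytes */
    uint8_t normal[4];    /*  4 bytes, biased RGBA8 */
    uint8_t tangent[4];   /*  4 bytes, biased RGBA8 */
    uint8_t colour[4];    /*  4 bytes */
};

int main(void)
{
    printf("full:   %zu bytes/vertex\n", sizeof(struct vertex_full));   /* 96 */
    printf("packed: %zu bytes/vertex\n", sizeof(struct vertex_packed)); /* 28 */
    return 0;
}
```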

In a more sane case, you could generate pretty useful vertices and still be capable of 120Mv/s - which is nearly half of the suggested maximum setup rate, and would give a couple of million polys a scene - *even if these bloated vertices were required for every pass*.
And this would give you about... zero time for pixel shading? If you've seen a post-transform pixel:vertex ratio histogram of polygons, you'd see they span many orders of magnitude. 90% of the time, either the pixel shader pipes or the vertex engine is heavily under-used. 120Mv/s would likely hold you to under 500K verts in a multipassed scene, and many of those wouldn't be visible either, since batch-level culling isn't perfect.

Anyway, this is off topic from the point I was making. Even with Cell's help we won't see 1.1 billion verts/s. Even with Cell's help you're not going to improve much on RSX's vertex shading ability.

It won't matter with today's games, because they are very light on the vertex load. But don't expect a huge 10x leap for the future, that's all.
 
Fafalada said:
Maybe if journalists (like the illustrious fellows from the Inq.) spent more time researching their articles instead of writing meaningless parabole based on their lack of comprehension of the target matter, there'd be less "deception" going around during various console launches.
I think you meant "hyperbole", not "parabole".
Also, I think you meant "attention-whoring hacks", not "journalists."
 
Mintmaster said:
People were expecting Cell to take over vertex shading duties. There are games out there that use 10 iterators as input to some pixel shaders. Position, eye vector, 1-4 light vectors, 1-4 halfway vectors, normal, texcoords for base texture, texcoords for 1-4 shadow maps... This is what I was talking about. 6 iterators is not excessive at all. You're talking about raw input to generate the vertex, and that means Cell isn't doing vertex shading. You're quoting me out of context.

No, I was responding to a single point being made, which was that bandwidth would be a bottleneck for doing "vertex shading" on Cell. There are many factors which have to be balanced if you're looking to have Cell help out in the rendering, but they certainly can be balanced, which makes it a perfectly reasonable thing to do on this architecture.

It was a self-contained point, and I quoted it entirely - it was not out of context. You went on to claim, based on some of this, that Cell wouldn't be used to do geometry work except for decompression. I'm going to flat out disagree with that, because in practice we're using it for a lot more.

Certainly Cell isn't going to magically make RSX exceed its theoretical limit - but it will be able to help make that many polygons look better, because extra processing can be performed. So it *will* be (and already is being, in actual fact) used to aid RSX in rendering, for a lot more than data decompression.

It's not a magical panacea like some people may be expecting, but it's more useful than you're making it out to be.
 
The Inq has a purpose; it is good for one thing alone: the leaks...
All the rest, be it the rumors they spread, their editorials, or, worse, the write-ups that come with their leaked information, is garbage.

It's only meant to arouse sensitive folks, who in turn will spread the link to the info and give The Inq an enormous amount of clicks. And on the internet, for these types of sites, clicks are money...

In other words, what The Inq do with their sensationalistic approach to what they consider "journalism" is make some fuss, bait, kick back and let the internet magic do the rest.
Mintmaster said:
I think the stuff about the setup is pretty relevant, though, and in fact taking into account the things that the programmer objects to (i.e. how they were measured) would only make it more favourable for the XB360, as we're discussing here. Not a flaw, especially compared to PC parts, but the polygon rate is certainly not the 1.1 billion that Sony was flaunting. Lots of people thought it was that fast.
The fact that RSX doesn't encourage developers to use sub-pixel polygons is hardly an issue at all for this coming generation.
I'm still uncertain that we'll get to see games with much more than 1 million polygons, after culling, per frame, at 60 FPS, let alone anything much higher than that. On any platform.

The problem with a lot of folks is that they don't understand that there's no need to set up a polygon if it's not visible in the scene... And logically, around half the polygons present in a scene are backfacing and therefore should not be visible, and they should not be set up.
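
For reference, the standard test is just a sign check on the projected triangle's winding; a minimal sketch in C (counter-clockwise front faces assumed; the names are mine):

```c
#include <stdio.h>

typedef struct { float x, y; } vec2;

/* Screen-space backface test: the z component of the 2D edge cross
   product is twice the triangle's signed area; its sign gives the
   winding. With CCW front faces, area <= 0 means backfacing (or
   degenerate), so the triangle can be rejected before setup. */
int is_backfacing(vec2 a, vec2 b, vec2 c)
{
    float area2 = (b.x - a.x) * (c.y - a.y)
                - (b.y - a.y) * (c.x - a.x);
    return area2 <= 0.0f;
}

int main(void)
{
    vec2 a = {0, 0}, b = {1, 0}, c = {0, 1};
    printf("CCW winding backfacing? %d\n", is_backfacing(a, b, c)); /* 0 */
    printf("CW  winding backfacing? %d\n", is_backfacing(a, c, b)); /* 1 */
    return 0;
}
```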

Add to that the fact that modern GPUs, _any of them_, are terribly inefficient with small polygons. As in, they really are not suited to dealing with a lot of small polygons, from an architectural standpoint, so there's really no hope for a really high number of polygons per scene.
For this generation, the only hope for really high polycounts, and maybe micro polygons, might have been the RS Toshiba was working on... But then again, it would have also meant that other trade-offs were made, a lot more problematic ones.

Well, to finish this post on a positive note, there are still some potential methods that can be applied to make better use of the polygons available.
I was discussing the polygon issue of next-gen titles with a developer recently, and he had some really interesting ideas about what to do to alleviate it. The idea has its roots in older techniques, so it shouldn't be too hard to implement. Tuning it and making it work continuously in a (close to) invisible manner might be a bit trickier, though, so I don't expect to see this type of technique implemented in first-gen games.
 
Vysez said:
The fact that RSX doesn't encourage developers to use sub-pixel polygons is hardly an issue at all for this coming generation.
Yup, that's why I said it isn't a flaw.

I'm still uncertain that we'll get to see games with much more than 1 million polygons, after culling, per frame, at 60 FPS, let alone anything much higher than that. On any platform.
I don't think we're really even close to that right now, so 1M would be an improvement.

The problem with a lot of folks is that they don't understand that there's no need to set up a polygon if it's not visible in the scene... And logically, around half the polygons present in a scene are backfacing and therefore should not be visible, and they should not be set up.
Well, depending on how you define "setup", that could be part of it. Usually in flow diagrams you have an arrow going from the vertex shader block to the setup block. Where do culling and clipping go? I consider triangle assembly from the vertices to be part of "triangle setup", and you can only cull once you know what your triangle is.

Add to that the fact that modern GPUs, _any of them_, are terribly inefficient with small polygons. As in, they really are not suited to dealing with a lot of small polygons, from an architectural standpoint, so there's really no hope for a really high number of polygons per scene.
A lot of that is due to the setup, I think. High-end video cards render 4 quads at a time, so even if a triangle is sub-quad and doesn't straddle boundaries, the rasterization section could probably handle 4 triangles per clock with a fast enough setup. I know you have pipelines essentially rendering discarded pixels in this scenario, but "terribly inefficient" is overstating the problem a bit. I would only go that far if you're talking about really small triangles with fairly long pixel shaders.
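
To illustrate the quad-granularity point with rough numbers (a back-of-envelope sketch, not a model of any specific GPU): a rasterizer working on 2x2 quads shades four pixels for every quad a triangle touches, so the smaller the triangle, the more of that work is discarded:

```c
#include <stdio.h>

/* Quad-shading efficiency = covered pixels / (quads touched * 4).
   The pixel and quad counts below are illustrative guesses. */
int main(void)
{
    struct { const char *desc; double pixels, quads; } cases[] = {
        { "1-pixel triangle, 1 quad",       1.0,  1.0 },
        { "10-pixel triangle, ~5 quads",   10.0,  5.0 },
        { "200-pixel triangle, ~60 quads", 200.0, 60.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%-30s %5.1f%% of shaded pixels useful\n",
               cases[i].desc, 100.0 * cases[i].pixels / (cases[i].quads * 4.0));
    return 0;
}
```

The waste only really bites when those mostly-discarded quads are running a long pixel shader, which is the qualification made above.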
 