Top developers slam PS3 "broken" allegations

Ben-Nice

http://www.gamesindustry.biz/content_page.php?aid=17547

The contentious triangle setup figure, which The Inquirer claims to be 270 million triangles per second, compared to around 500 million per second in the Xbox 360, came under fire first.

"It's just a pointless measurement," one programmer told us. "Where's the context? How were these numbers measured? There are loads of different ways you can measure tri performance, and just putting up headline figures like that tells you nothing."

"In fact, the PlayStation 2 had better tri performance than the Xbox, on paper," he continued. "Everyone knows that the Xbox was more powerful at running real games, but if you just wanted to fill a screen with 2D, flat colour, unlit triangles, then the PS2 was much better at that, so it looked great in benchmarks. That just shows how meaningless this measurement is - it's really pointless."

However, particular scorn was heaped upon the claim that the Cell is being "hobbled" by slow memory access - based on a Devstation slide which shows Cell having only 16MB/s read access to "Local Memory", compared to the 10-25GB/s access figures for other component and memory types in the PS3.

"They've got the wrong end of the stick grasped firmly in both hands," said another source regarding this claim. "I'm not even sure if they're holding the right stick."

Each developer concurred that the slide in question was referring to local memory on the RSX - the graphics memory, in other words, and not the local memory on the Cell processor which The Inquirer claimed was in question.

"I didn't see that slide at Devstation, but all the numbers add up," one coder said, "and it's a total non-issue. You never, ever need to access that memory from the Cell - I can think of some useful debugging things you might do with that access in the testing stage, but that's about it. In fact, on the PS2 you couldn't access that memory from the CPU at all, and it was never really a problem!"

"I can see a couple of reasons why you might want to use it," another developer told us, "but really, they're pretty obscure, and you could probably do them on the RSX anyway, since it's quite flexible. Besides, if you really need to access video memory from the Cell, you can use the RSX to copy it over into main memory really quickly - it's all there on the slide."

"I doubt a single person in the room batted an eyelid when they showed that slide," continued the first source. "It's exactly what we'd expect, and the bits that we actually need to use to make games are perfectly fast."

While dismissing The Inquirer's claims as entirely spurious - and pointing out that even if they were true, they would be flaws so serious that Sony would simply not be able to release the Cell chip in that state - at least one of our sources admitted that PS3 was taking some time to get used to, but perhaps not as much as some parts of the media have suggested.

"I'd say PS3 was a challenge to work on," he said, "but every new platform takes a while to get used to. Put it like this, I worked on early PS2 games, and those were a real nightmare - we're getting code up and running on PS3 much faster than we did last time around."

"Once people start doing really impressive stuff on PS3 and Xbox 360, they're both going to be much the same [in terms of difficulty]," he concluded. "Sony's giving us better tools this time around - they're still not great at communicating and there are some weird holes in their developer support, but they've learned a lot of lessons from PS2."
 
Can we once and for all put a ban on posting Inquirer news? Their last 2 attempts to ruin the PS3 have been so false and misleading you'd think people would just stop going there altogether.

That said, some nice comments in there.

"I'd say PS3 was a challenge to work on," he said, "but every new platform takes a while to get used to. Put it like this, I worked on early PS2 games, and those were a real nightmare - we're getting code up and running on PS3 much faster than we did last time around."

"Once people start doing really impressive stuff on PS3 and Xbox 360, they're both going to be much the same [in terms of difficulty]," he concluded. "Sony's giving us better tools this time around - they're still not great at communicating and there are some weird holes in their developer support, but they've learned a lot of lessons from PS2."

Always nice to hear that developing on the PS3 has been easier than the PS2 was at the same point in its cycle.
 
Synergy34 said:
Can we once and for all put a ban on posting Inquirer news?

When the thousand monkeys sitting in front of their thousand typewriters finish Hamlet, we're going to see the Inq report something right. :p
 
Think of it this way: Without the Inquirer most posters would still not know the bandwidth and triangle setup rate on the PS3. It appears to me that people use the Inq as stealth leaks to discuss issues, etc... of course they are used for viral PR as well, but when you are a puppet what do you expect?
 
Also all that false news encourages devs to 'speak out', so it serves a purpose in the end. Though it is annoying when other sites also spread the 'news' regardless and I hear friends report that 'the PS3 is looking to be broken and half as powerful as XB360 at much higher price.'
 
Uhh.. Why?

So far as I can tell, nothing was inaccurate about the Inq report except the conclusions they formed out of their own lack of understanding.

However, the leaked performance measures seem to have been confirmed.

And it is actually interesting that the Cell can't quickly access the GDDR, and that if you need to use that additional 256MB, you essentially have to bog down the RSX.

I believe I remember questions being raised about the utility of the PS3 using both GDDR for the RSX and XDR for the Cell. This also raises questions about the initial discussions that Sony wanted to use 512MB of XDR but that it was too expensive, and that the addition of the RSX, instead of using further Cells as a GPU, forced a split in the memory. The result is that instead of having a single pool of 512MB that both the CPU and GPU can access equally efficiently, the PS3 has two pools that can still be accessed by both the CPU and the GPU, but at a far slower rate by the CPU.

I don't think it's any big deal, because Sony has let the developers know and as long as they are aware of the situation, then they'll work to the system's strengths and away from its weaknesses, just like with any other.

But I do think the Inq story actually leaked some interesting performance measurements that we weren't previously aware of that do actually lend themselves to some interesting debate/theory over the development choices of Sony with the PS3.
 
Rancid - no one had a problem with the underlying data. It was just the gross misinterpretation on the part of The Inquirer - and that's worth debunking.
 
Titanio said:
Rancid - no one had a problem with the underlying data. It was just the gross misinterpretation on the part of The Inquirer - and that's worth debunking.
I think the stuff about the setup is pretty relevant, though, and in fact taking into account the things that the programmer objects to (i.e. how they were measured) would only make it more favourable for XB360, as we're discussing in here. Not a flaw, especially compared to PC parts, but polygon rate is certainly not the 1.1 Billion that Sony was flaunting. Lots of people thought it was that fast.

But for the other part, like how fast Cell reads and writes across FlexIO, you're absolutely right. Inq was dead wrong in their assertions there. Doesn't impact games at all, really.
 
What does this do to the theory that Cell's SPUs can help out on tasks with RSX? If you want to do that, you have to start copying data back and forth with RSX in order for Cell to see it. In the end, it might be a waste of time.
 
OtakingGX said:
What does this do to the theory that Cell's SPUs can help out on tasks with RSX? If you want to do that, you have to start copying data back and forth with RSX in order for Cell to see it. In the end, it might be a waste of time.

See, that's where I got hung up myself... But then it hit me like a tonne of bricks... Just do this sort of thing in XDR memory, as both parts have roughly equivalent access.

The only thing I'd wonder about is whether the numbers reflect actual communication read bandwidth, or just bandwidth to the GDDR3 memory. If it were the former, I'd wonder if it could effectively limit things like occlusion queries, etc.
 
Ben-Nice said:
"It's just a pointless measurement," one programmer told us. "Where's the context? How were these numbers measured? There are loads of different ways you can measure tri performance, and just putting up headline figures like that tells you nothing."

I don't understand this claim because there was context. Xbox 360 can set up 500 million polygons with trivial shaders (Xbox 1 level), while the RSX can set up 275 million polys with zero shaders. Is that not correct? What is wrong with this measurement?
 
There really was a lot of hyperbole being spread around since last E3 about the potential of RSX and its specs.

Although The Inquirer are meatheads for writing such a 'story' about these specs in the way that they did, it is nice to finally see and discuss (I just read :)) some of the realities of the chip.
 
"In fact, the PlayStation 2 had better tri performance than the Xbox, on paper,"

Weren't the PS2's results the combination of EE + VU0 + VU1, and Xbox was higher anyway with NV2A + XCPU? I think raw fillrate and memory bandwidth were the only specs on PS2 that were actually higher than the Xbox's.

"I didn't see that slide at Devstation, but all the numbers add up," one coder said, "and it's a total non-issue. You never, ever need to access that memory from the Cell - I can think of some useful debugging things you might do with that access in the testing stage, but that's about it. In fact, on the PS2 you couldn't access that memory from the CPU at all, and it was never really a problem!"

Wouldn't it kill Cell + RSX actually cooperating on any graphics if Cell can't write to RSX's ram? I mean, yeah, you can copy out the data to Cell, but how do you get it back to RSX so it can utilize it? Well, I guess RSX can read from main memory too, huh?
 
Fox5 said:
Wouldn't it kill Cell + RSX actually cooperating on any graphics if Cell can't write to RSX's ram? I mean, yeah, you can copy out the data to Cell, but how do you get it back to RSX so it can utilize it? Well, I guess RSX can read from main memory too, huh?

CELL can write to the GDDR3 pool @ ~4GB/s. CELL can also write directly to RSX. And as you said RSX can read from XDR.
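A back-of-envelope sketch of why the slow read path doesn't matter much in practice. The 16MB/s figure is from the slide quoted above; the 25GB/s figure and the 720p buffer size are illustrative assumptions on my part, not confirmed numbers:

```python
# How long Cell would take to get at one 32-bit 720p buffer in GDDR3.
frame_bytes = 1280 * 720 * 4            # ~3.7MB, illustrative frame size

# Reading GDDR3 directly over the 16MB/s path from the slide:
direct_s = frame_bytes / 16e6           # roughly a quarter of a second

# Having RSX copy the buffer into XDR first, then reading it at
# main-memory speed (25GB/s assumed, the top of the slide's range):
copied_s = frame_bytes / 25e9           # a fraction of a millisecond

print(f"direct read: {direct_s * 1000:.0f}ms, via RSX copy: {copied_s * 1e6:.0f}us")
```

So even with the copy step, going through RSX is three orders of magnitude faster than reading GDDR3 directly, which is why nobody plans to use the direct path for anything but debugging.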
 
Hardknock said:
Xbox 360 can setup 500 million polygons with trivial shaders
I think someone misquoted this. XBox 360 can setup 500 million polygons with non-trivial shaders. The documents actually say: "The Xbox 360 GPU can consistently achieve more than four times the Xbox GPU triangle rate with minimal vertex reuse and a complex vertex shader."

while the RSX can setup 275 polys with zero shaders. Is that not correct? What is wrong with this measurement?
With perfect vertex re-use, RSX can handle 32-instruction shaders at 275Mpolys per second, so I wouldn't call that zero shaders. But considering all limitations like attribute reads, yeah, you more or less need a simple vertex shader to reach that peak.
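For what it's worth, the 275M figure is at least internally consistent. A quick sanity check, assuming the pre-launch RSX specs of 8 vertex pipelines at 550MHz (both assumptions, not confirmed numbers):

```python
pipes, clock_hz, shader_len = 8, 550e6, 32    # assumed pre-launch RSX figures

# Each pipe retires roughly one instruction per clock, so a
# 32-instruction vertex shader caps transformed vertices at:
verts_per_s = pipes * clock_hz / shader_len   # 137.5M verts/s

# With perfect reuse, a large regular mesh approaches two
# triangles per newly transformed vertex:
tris_per_s = verts_per_s * 2                  # 275M tris/s, the quoted peak
```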
 
Personally, I'd like to see peak performance numbers (and the true ones that come out later) thrown out entirely. For every console Sony has launched, they have simultaneously launched a rather large PR campaign about how their latest and greatest is a supercomputer, which has never proven to be the case. So when the source stops, things like the wild claims on the issue made by the Inq stop. But as long as Sony uses unrealistic numbers to promote consoles that, for some reason or another, gradually drop by a large percent over the next 6-12 months, people will call them out on it. Not to say MS doesn't do this too; Sony is just far too bold with statements and claims.


I personally think it's a little unfair to attack the Inq for misconstruing a BS number Sony released to begin with and then changed.
 
Titanio said:
Rancid - no one had a problem with the underlying data. It was just the gross misinterpretation on the part of The Inquirer - and that's worth debunking.

Sure... The misinterpretation is definitely worth debunking, but if you go back and re-read this thread, people were saying that the Inq data was horribly incorrect and therefore no Inq reports should ever be posted here again.

When the reality of the situation is that the Inq actually provided us all with firm data that nobody before had access to. The fact that the Inq are a bunch of morons and misinterpreted the data doesn't mean that the data itself isn't important.

This is actually a prime example (IMO) of how the Inq is actually a very important source of information... not the opposite.

The only thing we need to realize is that the information might be valid, but their interpretation isn't.
 
OtakingGX said:
What does this do to the theory that Cell's SPUs can help out on tasks with RSX? If you want to do that, you have to start copying data back and forth with RSX in order for Cell to see it. In the end, it might be a waste of time.
The biggest problem with Cell offloading vertex work is that RSX is limited to reading one 4x32-bit attribute per clock. Vertex shaders generally amplify data, feeding more data to the interpolators (i.e. input to the pixel shader) than they read from vertex buffers. If you need 4 iterators, you're stuck at 137.5 MVerts/s, regardless of how fast Cell can generate them. RSX can handle 32-instruction vertex shaders at this rate anyway, so I don't know how much you'll really save. It can get a lot worse than 4 iterators as well. FarCry, for example, couldn't render in one pass with PS2.0's 8 iterators, and this was why their PS3.0 performance patch was made.

It also appears that RSX can only read vertex data at 4GB/s from memory, since the attribute fetch goes through the command processor (see Dave's comment). Then you're constrained even more. If Cell generated a 6 attribute vertex, you're looking at 40M verts per second. Not much of an "assistance" really.
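The two ceilings above can be checked with simple arithmetic (the clock speed is the pre-launch 550MHz figure, an assumption):

```python
clock_hz = 550e6                          # assumed RSX clock

# One 4x32-bit (16-byte) attribute fetch per clock means four
# iterators take four clocks per vertex:
verts_4_attrs = clock_hz / 4              # 137.5M verts/s

# Attribute fetches through the command processor at ~4GB/s:
bytes_per_vert = 6 * 16                   # six 16-byte attributes
verts_bw_limited = 4e9 / bytes_per_vert   # ~41.7M verts/s, i.e. the ~40M above
```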

I think "Cell-RSX cooperation" will be used more for saving space than accelerating performance. You can sometimes compress geometry a lot with simple techniques. With a 16-bit height field, you could generate position, normal, and tangent from neighbouring values. Tessellation is another use.
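A minimal sketch of the height-field idea, in Python for illustration (the sampling scheme, scales, and function name are my own assumptions, not anything from a PS3 toolchain): store only 16-bit heights, and rebuild position and normal per vertex from the neighbouring samples.

```python
def heightfield_vertex(h, x, z, scale=1.0 / 65535.0, cell=1.0):
    """Return (position, normal) for grid point (x, z) of 16-bit height map h,
    using neighbouring samples (central differences) for the normal."""
    y = h[z][x] * scale
    # Central differences, clamped at the borders of the grid.
    hl = h[z][max(x - 1, 0)] * scale
    hr = h[z][min(x + 1, len(h[0]) - 1)] * scale
    hd = h[max(z - 1, 0)][x] * scale
    hu = h[min(z + 1, len(h) - 1)][x] * scale
    # Surface y = f(x, z) has normal proportional to (-df/dx, 1, -df/dz).
    nx, ny, nz = (hl - hr) / (2 * cell), 1.0, (hd - hu) / (2 * cell)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (x * cell, y, z * cell), (nx / length, ny / length, nz / length)

# A flat height field should give straight-up normals:
flat = [[32768] * 4 for _ in range(4)]
pos, normal = heightfield_vertex(flat, 1, 1)   # normal is (0.0, 1.0, 0.0)
```

Per vertex you ship 2 bytes instead of the 48 or so for a full position/normal/tangent set, which is the kind of saving the SPUs (or RSX itself) could unpack on the fly.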
 
RancidLunchmeat said:
When the reality of the situation is that the Inq actually provided us all with firm data that nobody before had access to. The fact that Inq is a bunch of morons and they misintrepretated the data, doesn't mean that the data itself isn't important.
Hmmm... I'm sure that slide already circulated the web during or after the Devstation event, or was it GDC?
I know I've seen it before, and I'm pretty positive it was already discussed here then, too.

Yeah, the data is good, but the article based on that data is no good, and there's really no reason to give the Inquirer any credit for some correct data they got their hands on.
That's just the way these sensational tabloids work: misinterpret correct data to make it controversial and more interesting to the masses :)
 