New Technical Information About RSX???...But In Japanese

Just to finish that sentence... ;)

True, but now on GAF they are speculating that "threading troubles" refers to CPU stalls. So the part about lacking shader ALU performance would be separate (and a more concrete problem, imo) from having trouble with CPU threading stalls (6 threads and all that), if that interpretation is correct.

Is it more likely this applies to the CPU or GPU? I'd assume it could apply to the GPU, because isn't Xenos pretty heavily threaded, and also hasn't small cache been tagged as a problem with Xenos before? So lack of threading resources could be referring to cache, registers, and the like on Xenos, or it could be referring to the CPU, but it's most likely referring to Xenos in my opinion, given the context of the article.
 
Just to finish that sentence... ;)
What is there to get familiar with? Certainly with the Xenos architecture itself, but there shouldn't be anything inherent about a unified architecture that takes getting used to; it should just work. Any architectural issues that developers need to get used to should be unrelated to that point, and simply based on the specific architecture/quirks of the GPU.

Now the compiler might be a troubling link in the chain. ATI has had quite a long time with their previous architecture, so much so that certain parts carried over into the R5xx series, according to ATI, just for the sake of working well with the compiler they already had. So I can imagine that the Xenos compiler is still quite immature.

But like Laa-Yosh said, not really anything that hasn't been discussed here before. If there's a general bottleneck with RSX, consensus seems to be on memory bandwidth. And for Xenos, not so much, meaning it's shader rate-limited.
 
But like Laa-Yosh said, not really anything that hasn't been discussed here before. If there's a general bottleneck with RSX, consensus seems to be on memory bandwidth. And for Xenos, not so much, meaning it's shader rate-limited.

True, but you neglect that Xenos probably simply has somewhat LESS shader power altogether than RSX. That's been a longtime educated guess on these forums.
 
True, but you neglect that Xenos probably simply has somewhat LESS shader power altogether than RSX. That's been a longtime educated guess on these forums.
Being bottlenecked in one area or another doesn't automatically imply more/less power, so I don't see how I neglected any power difference there... one which most have also agreed is fairly negligible either way, too. But that's a discussion that's been had and killed and beaten and might even be discouraged by the moderators...
 
Being bottlenecked in one area or another doesn't automatically imply more/less power, so I don't see how I neglected any power difference there... one which most have also agreed is fairly negligible either way, too. But that's a discussion that's been had and killed and beaten and might even be discouraged by the moderators...

The shaders' performance is directly compared in the article. It's in the context of the differing architectures compared to each other, not in a vacuum.

It jibes well with what my own personal views have been:

RSX: More shader power

Xenos: More framebuffer bandwidth

In the end:

Somewhat similar performance.
 
The shaders' performance is directly compared in the article. It's in the context of the differing architectures compared to each other, not in a vacuum.

It jibes well with what my own personal views have been:

RSX: More shader power

Xenos: More framebuffer bandwidth

In the end:

Somewhat similar performance.

That's not what the article says though.
 
It is what it implies, though :)

I think it implies that there's a learning curve between both systems but in different places. They expect devs to become more proficient in utilizing the hardware to the point where there isnt an obvious difference between the two.
 
From what I understand reading all this, all the games at TGS should have had a high level of suckiness.

What happened?
 
The shaders' performance is directly compared in the article. It's in the context of the differing architectures compared to each other, not in a vacuum.

It jibes well with what my own personal views have been:

RSX: More shader power

Xenos: More framebuffer bandwidth

In the end:

Somewhat similar performance.

Exactly...all you need to know is that the GPUs are similar performance-wise (a wash, really), but apples to oranges architecturally.

I think CPU power and input medium configuration will differentiate one piece of kit over the other this generation.
 
“For lower resolutions it is a fantastic GPU, but it gets difficult for high end HDTV resolutions”, says a developer.


This part is pretty interesting, does this mean that developers have to make huge graphical sacrifice in order to display PS3 games in HD?
 
Exactly...all you need to know is that the GPUs are similar performance-wise (a wash, really), but apples to oranges architecturally.

Not that it matters (beaten to death), but as more time goes on I see it the reverse and feel like I am looking at R300 and NV30 all over again (R300 had an edge in bandwidth and in DX9-performant features). Many reputable review sites saw it as a wash, if not a nod toward NV30, until featureset performance was revealed as time went on. Dynamic branching in pixel shaders, framebuffer bandwidth, etc. may not mean much in 2006, when most developers are still leveraging designs and technologies targeting older consoles and low-end PC hardware, but some day that will change.

But of course I see the CPUs also being quite relevant (both in architecture and performance), as well as other architectural decisions (e.g. UMA/NUMA), and think these various decisions have different impacts depending on whether the software is big budget, a port, multiplatform, etc... no really easy shoehorn.
 
Not that it matters (beaten to death), but as more time goes on I see it the reverse and feel like I am looking at R300 and NV30 all over again (R300 had an edge in bandwidth and in DX9-performant features). Many reputable review sites saw it as a wash, if not a nod toward NV30, until featureset performance was revealed as time went on. Dynamic branching in pixel shaders, framebuffer bandwidth, etc. may not mean much in 2006, when most developers are still leveraging designs and technologies targeting older consoles and low-end PC hardware, but some day that will change.

But of course I see the CPUs also being quite relevant (both in architecture and performance), as well as other architectural decisions (e.g. UMA/NUMA), and think these various decisions have different impacts depending on whether the software is big budget, a port, multiplatform, etc... no really easy shoehorn.

You can't tap performance that isn't there.

RSX has more shader power and developers are using XDR and FlexIO to overcome bandwidth bottlenecks.

Xenos has the advantage of EDRAM for bandwidth (although it has less GPU to CPU bandwidth) but less shader power.
 
Acert is speaking though to the ability of R500 to more readily adapt and adopt some of the more 'sophisticated' pixel shading directions things will start to go in when DX10 hits... so it's not that he's saying Xenos is suddenly more powerful, simply that there are some shading instructions it may be able to handle that RSX simply won't be able to.

“For lower resolutions it is a fantastic GPU, but it gets difficult for high end HDTV resolutions”, says a developer.


This part is pretty interesting, does this mean that developers have to make huge graphical sacrifice in order to display PS3 games in HD?

The same sacrifices you would expect your GPU at home to make in reaching double the resolution; no more, no less. And that will of course vary situation to situation (and there are threads dedicated to exactly that topic I'm sure you can search.) For example...
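The "high end HDTV" cost is mostly raw pixel count. A quick back-of-the-envelope sketch in Python (the resolutions are the standard 720p/1080p figures; treating fill and shading cost as scaling linearly with pixels is a simplification):

```python
# Rough arithmetic behind the "high end HDTV resolutions" comment:
# fill rate and per-pixel shading cost scale roughly with pixel count.
def pixels(width, height):
    return width * height

p720 = pixels(1280, 720)    # 921,600 pixels
p1080 = pixels(1920, 1080)  # 2,073,600 pixels

ratio = p1080 / p720
print(f"1080p has {ratio:.2f}x the pixels of 720p")  # 2.25x
```

So "double the resolution" is really 2.25x the pixels to fill and shade each frame, before any framerate target enters the picture.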
 
“For lower resolutions it is a fantastic GPU, but it gets difficult for high end HDTV resolutions”, says a developer.


This part is pretty interesting, does this mean that developers have to make huge graphical sacrifice in order to display PS3 games in HD?

I guess "graphical sacrifice" is somewhat subjective. For some people a 50% drop from 60fps to 30fps would be a huge graphical sacrifice, while others don't care or even notice.

Lair running at 1080p at TGS did not exactly scream huge graphical sacrifice. It could have been smoother. But F5 seems to believe they can get it running at full speed at 1080p, so I guess it just depends on who you talk to. And we have to give them the benefit of the doubt that they know what they are doing, and that they know what the right thing to do is for their game.

At the same time, no one ever expected it to be easy for devs to hit "high end HDTV" resolutions. In fact most thought it was totally impossible. But as usual, smart people often figure out ways to make the seemingly impossible happen.
 
You can't tap performance that isn't there.

RSX has more shader power and developers are using XDR and FlexIO to overcome bandwidth bottlenecks.
Differences are more than just that. Stick a PS on RSX with dynamic branching and see how its 'more shader power' fares against Xenos with the same shader...

Performance is but one metric. Features are another.
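On the dynamic branching point: GPUs of this era run pixels in lockstep batches, and a branch taken by any pixel in a batch is paid for by the whole batch, so branch granularity can matter as much as raw ALU count. A toy Python sketch of that effect (the batch sizes below are made up for illustration, not the real RSX/Xenos granularities):

```python
# Toy model of dynamic-branching cost on a lockstep GPU:
# pixels execute in fixed-size batches, and if ANY pixel in a
# batch takes the branch, the whole batch runs the branch body.
# Finer branch granularity (smaller batches) wastes less work.
def wasted_work(branch_mask, batch_size):
    """Count pixels that run the expensive path only because a
    neighbor in their lockstep batch took the branch."""
    wasted = 0
    for i in range(0, len(branch_mask), batch_size):
        batch = branch_mask[i:i + batch_size]
        if any(batch):  # whole batch executes the branch body
            wasted += sum(1 for taken in batch if not taken)
    return wasted

# 256 pixels where only the first 16 (say, one lit region) branch:
mask = [j < 16 for j in range(256)]
print(wasted_work(mask, batch_size=64))  # coarse batches: 48 wasted
print(wasted_work(mask, batch_size=16))  # fine batches: 0 wasted
```

Same shader, same pixels, very different effective throughput depending on how coarse the hardware's branching is, which is why "more shader power" on paper doesn't settle the dynamic branching question.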
 