PS3 GPU not fast enough.. yet?

ShootMyMonkey said:
Or lots of overdraw.

You shouldn't really have so much overdraw that it becomes an issue unless you naively submit all or most of your geometry to the GPU, no? Put that CPU to use and do some pre-VS HSR!

On the geometry-heavy passes, I don't really know enough to comment. For the devs here: do you see the setup rate becoming a bound before something else would?

edit - thanks Dave, that does make sense.
 
xbdestroya said:
Charlie D in particular... I mean I'm not one for conspiracy theories, but this guy just has a naked agenda. Article after article from him...

He loathes Sony with a passion because of their draconian anti-copying and DRM schemes, and that loathing spills over into everything else Sony-related.

But to be fair, he also seems to loathe Microsoft.

Cheers
 
Titanio said:
I'd question how many resources you'd have to spend per vertex if you actually tried to draw such a number of triangles on screen. The only dev comment I can remember in this regard was ERP's original one about the level of shading you'd be reduced to if you tried to max out Xenos's setup rate. I think other things would become a problem much sooner than your setup rate if you actually tried to draw so many triangles that the rate itself became an issue. I'd also wonder if it'd be possible at all with any kind of decent HSR going on on the CPU... I mean, if you had 4.5M visible triangles per frame or whatever, you'd be talking about sub-pixel triangles!
Your calculation is not allowing for two 1080P screens running at 120 fps. :devilish::LOL:

-aldo
 
Acert93 said:
....

Actually... a new thread on "Polygons on Next Gen Consoles" would be a good/interesting thread if we could get some developer feedback. Any in-depth discussion here is bound to scare away meaningful comments due to the title.

That's a good idea.
 
Last year people were complaining about the 360's 500M polygons/sec setup limit, and comparing it to the 1 billion poly spec for the PS3. How does this 275M number affect that discussion?
 
Triangle setup doesn't actually draw anything. You take the transformed vertices and their indices and figure out and set up the triangles for subsequent rasterisation. Some may even be discarded in the process, so no drawing really happens yet. Once the triangles are set up, they're rasterised, and then their fragments are shaded.
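The stages described above can be sketched in a few lines. This is only a toy illustration, not how any real GPU implements setup: it assembles index triples into triangles and discards some (here, back-facing ones by signed area) before any rasterisation would happen. The vertex coordinates are made up.

```python
# Toy sketch of triangle setup: assemble transformed vertices into
# triangles via the index buffer, and discard some before rasterisation.

def signed_area(a, b, c):
    """Twice the signed area of triangle abc in screen space."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def setup_triangles(verts, indices):
    """Assemble index triples into triangles, culling degenerate
    and back-facing (clockwise, non-positive-area) ones."""
    survivors = []
    for i in range(0, len(indices), 3):
        tri = tuple(verts[j] for j in indices[i:i + 3])
        if signed_area(*tri) > 0:   # keep counter-clockwise triangles only
            survivors.append(tri)
    return survivors

verts = [(0, 0), (10, 0), (0, 10), (10, 10)]
indices = [0, 1, 2,   # CCW winding: survives setup
           2, 1, 0]   # same triangle wound CW: culled
print(len(setup_triangles(verts, indices)))  # 1 of the 2 triangles survives
```

Only the survivors would go on to be rasterised and have their fragments shaded.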
 
DUALDISASTER said:
that 500 million poly count for the 360...isn't that with the 360's gpu rendering poly's with no textures or pixels?

I believe the comment was "500 million with trivial shaders"... trivial meaning like, IIRC, "Xbox1 capabilities"-trivial..
 
Didn't ERP say something way back about Xenos being able to get close to that limit if using Xbox1-level shaders?

From Acert's post in the 'Chicks in 360's Armour' thread:
"Obviously 500M is peak, but it is unlike past claims (which are usually degrees of magnitude greater than real-life in-game performance). Xbox 360 is setup limited and the 500M number is said to be with non-trivial shaders. I believe ERP has said that some games, if using Xbox1 level shaders, may get close to the limit."
http://www.beyond3d.com/forum/showthread.php?p=500256&highlight=triangle+setup#post500256
 
Titanio said:
Neither does Sony's spec ;)
j/k of course. Your comment definitely put things in perspective.

So what use is it if the X360 can do 500 million/sec while the PS3, touted at 1.1 billion, can presently only process 275 million? If 125 million triangles/sec is more than enough for 1080p at 60 fps, does this performance information reveal anything significant about real-world performance comparisons between the two systems?

-aldo
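A quick note on where a "125 million triangles/sec is enough for 1080p at 60 fps" figure can come from: at one triangle per pixel per frame (my assumption here, since the post doesn't spell it out), 1080p60 needs almost exactly that many triangles per second.

```python
# Back-of-envelope: one triangle per pixel per frame at 1080p60.
pixels_per_frame = 1920 * 1080           # 2,073,600 pixels
tris_per_second = pixels_per_frame * 60  # one tri per pixel, 60 frames/sec
print(tris_per_second)                   # 124,416,000 -- about 125M
```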
 
DUALDISASTER said:
that 500 million poly count for the 360...isn't that with the 360's gpu rendering poly's with no textures or pixels?


Nope, not from what most of us have read.

The 500 million polygon count of the Xenon's (Xbox 360's) GPU (the Xenos) goes all the way back to mid-2004, when tech info first came out. It's the triangle setup limit.

X360's 500M polygons/sec is with textured polygons, most likely with all features on (an old-school term) and with "non-trivial shaders", meaning, probably, decent amounts of pixel-shading going on, I would think.

The Xbox 360's Xenos GPU, if using ALL of its 48 shader ALUs simply for geometry transforms, could theoretically hit 6 BILLION vertices/polygons per second.

Or, I suppose, if you want to count 1 vert as 1/3 of a polygon, then 2 billion polygons/sec. That's raw computational performance, not what could be displayed on screen.

Anyway, the Xbox 360's Xenos GPU is more likely to hit 500 million pixel-filled, textured, pixel-shaded polygons/sec with features on than the Xbox1 was to hit ~116 million polygons/sec with its 233 MHz NV2A GPU.


Now one or more of Beyond3D's real techheads can point out where I screwed up :p
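The 6-billion figure above can be roughly reconstructed, assuming a bare 4x4 matrix transform costing 4 vector MAD ops per vertex (an assumption on my part, not a quoted spec):

```python
# Rough reconstruction of the 6B vertices/sec "raw math" figure.
alus = 48                        # Xenos unified shader ALUs
clock_hz = 500_000_000           # Xenos core clock, 500 MHz
ops_per_vertex = 4               # assumed: one vec4 MAD per matrix row
vertices_per_sec = alus * clock_hz // ops_per_vertex
print(vertices_per_sec)          # 6,000,000,000 vertices/sec

# Counting 1 vertex as 1/3 of a polygon gives the 2B polygons/sec figure:
print(vertices_per_sec // 3)     # 2,000,000,000 polygons/sec
```

As the post says, this is raw computational throughput, not anything that could actually be set up or displayed.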
 
aldo said:
j/k of course. Your comment definitely put things in perspective.

So what use is it if the X360 can do 500 million/sec while the PS3, touted at 1.1 billion, can presently only process 275 million? If 125 million triangles/sec is more than enough for 1080p at 60 fps, does this performance information reveal anything significant about real-world performance comparisons between the two systems?

-aldo

I don't think so; neither is typically going to touch its peak setup rate in the real world, or push that number of tris - games with trivial Xbox1-level shaders pushing 500M polygons are not typical, and in fact I doubt you'll ever see it ;) I'd be curious whether any dev thinks setup rate would ever typically be a bound on either system.

(And this isn't a matter of "doing" x number of polys per sec... at least typically, when people say that, they refer to the transform rate, not the setup rate.)
 
Megadrive1988 said:
X360's 500M polygons/sec is with textured polygons, most likely with all features on (an old-school term) and with "non-trivial shaders", meaning, probably, decent amounts of pixel-shading going on, I would think.

Well, for starters, you meant vertex shading ;)
 
Titanio said:
... if setup rate would ever typically be a bound.

275Mtris is 4.5 tris/pixel @1280x720x60fps.

The only thing that seems likely to be bounded by this is the noise caused by certain people concerning the 500Mtris/s setup limit on Xenon.

Cheers
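Checking the triangles-per-pixel arithmetic above: the quoted 4.5 seems to come from rounding a 720p frame to roughly a million pixels (my guess, not stated in the post); the exact value is a shade under 5.

```python
# Triangles per pixel implied by RSX's 275M tris/sec setup limit.
setup_rate = 275_000_000                 # tris/sec
fps = 60
tris_per_frame = setup_rate / fps        # ~4.58M tris per frame

pixels_720p = 1280 * 720                 # 921,600
pixels_1080p = 1920 * 1080               # 2,073,600
print(round(tris_per_frame / pixels_720p, 2))   # ~4.97 tris/pixel at 720p60
print(round(tris_per_frame / pixels_1080p, 2))  # ~2.21 tris/pixel at 1080p60
```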
 
Gubbi said:
275Mtris is 4.5 tris/pixel @1280x720x60fps.

The only thing that seems likely to be bounded by this is the noise caused by certain people concerning the 500Mtris/s setup limit on Xenon.

Ha, true :LOL:

I was actually trying to figure out how few cycles RSX would need to spend per vertex on average (during shading) to even supply enough vertices to make that a problem, but the number seems so low I can't reasonably think it would be.
 
Gubbi said:
275Mtris is 4.5 tris/pixel @1280x720x60fps.

And at 1080p that means 2.2 triangles/pixel... the little problem is understanding whether nVIDIA's specs mean what the specs SCE used for the GS mean (many other vendors also count 1 vertex = 1 triangle when rating their graphics processors); otherwise I would have to believe that the GS's set-up engine saturates at 225 MVertices/s (since they quote it at 75 MTriangles/s) ;).

I hope I am getting this wrong and that when nVIDIA says 2 cycles per triangle they really mean three vertices, and not each vertex of a triangle list, because RSX having a triangle set-up rate only 3.6x that of the GS seems lower than what people would expect from a console shipping in late 2006 at such a price-point (though it is still worth what's inside the console, for me).

Even if these seem PRMan-like stats, I do not think that 2.2 or 4.5 triangles per pixel, counting overdraw and multi-pass rendering (which sometimes you just cannot avoid), is that high an amount. It reminds me of the answer SCE gave when asked about the fact that the PSP's GE+GU do not support HW clipping on any plane besides the front plane: "oh, why worry... you have a coordinate system from [0;0] to [4096;4096], which is MORE than enough"... and suddenly tons of developers rose in anger :p.
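The 275M number and the "3.6x the GS" comparison above both fall out of the 2-cycles-per-triangle figure, assuming it applies per set-up triangle (which is exactly the interpretation being questioned in the post):

```python
# Where 275M tris/sec and the GS comparison come from.
rsx_clock_hz = 550_000_000              # RSX clock, 550 MHz
cycles_per_tri = 2                      # nVIDIA's quoted setup cost
rsx_setup = rsx_clock_hz // cycles_per_tri
print(rsx_setup)                        # 275,000,000 tris/sec

gs_setup = 75_000_000                   # GS quoted at 75 MTriangles/s
print(round(rsx_setup / gs_setup, 1))   # ~3.7x -- the "3.6x" in the post
```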
 
EndR said:
I believe the comment was "500 million with trivial shaders"... trivial meaning like, IIRC, "Xbox1 capabilities"-trivial..

Clearly it's 48 ALU ops/tri at that point, distribute them as you will.

We commonly gloss over the relative complexity of GPU pipelines, and attach to numbers for the bits we have on this forum. There are many things other than triangle setup and ALU/shader count that will limit you in the pipeline.

These are all peak numbers, they have little to do with real applications.
 
Titanio said:
Ha, true :LOL:

I was actually trying to figure out how few cycles RSX would need to spend per vertex on average (during shading) to even supply enough vertices to make that a problem, but the number seems so low I can't reasonably think it would be.

Well, running eight 16-cycle shaders at 550 MHz in parallel (and normally on NV4X/G7X chips we have seen the VS ALUs' domain clocked a little higher than the quoted clock-speed) would produce exactly that amount, and I do not think that all the vertex shading will be done only by those 8 VS ALUs, or that the SPEs will always be charged with rasterizing and texturing all the polygons they run Vertex Shader programs on locally.

We know that Pixel Shading is where you will spend a HUGE amount of your frame-time, but are we really assuming no T&L at all will be done on any SPE, and that if it is done, the SPEs will always also rasterize those polygons?

Last I heard, Xenos's ability to churn through VS programs by dedicating up to all 48 shader ALUs to the task for some limited number of cycles did not seem so useless. So it is not like having a good amount of available shader ALUs never pays off :p.
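The "would produce exactly that amount" arithmetic above works out like this: eight vertex shader units, each emitting one vertex every 16 cycles at 550 MHz, exactly match the 275M/sec setup rate.

```python
# Eight VS units running a 16-cycle vertex shader at 550 MHz.
vs_units = 8
clock_hz = 550_000_000
cycles_per_vertex = 16
vertices_per_sec = vs_units * clock_hz // cycles_per_vertex
print(vertices_per_sec)   # 275,000,000 -- exactly the quoted setup limit
```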
 
ERP said:
Clearly it's 48 ALU ops/tri at that point, distribute them as you will.

We commonly gloss over the relative complexity of GPU pipelines, and attach to numbers for the bits we have on this forum. There are many things other than triangle setup and ALU/shader count that will limit you in the pipeline.

These are all peak numbers, they have little to do with real applications.

Sure, the beauty is to see if, and by how much, each set of numbers drops when adding real work done by "real applications", but devs here do not want to play nice and give the numbers out ;). J/K, no jobs risked is better :).
 