Fillrate in the Next Gen Consoles

expletive

Veteran
Can someone explain to me what the fillrate is for the Xenos and the RSX (assuming a higher clocked G70)? Also, what's the difference between gigapixels and gigasamples? How do these different specs come into play for graphics, and how much does it matter at 1280x720?

Thanks in advance for any info!

John
 
Xenos - 4 Gigapixels - 16 Gigasamples/sec
RSX - 8.4 Gigapixels - 16.8 Gigasamples/sec

The first number is the raw fillrate, the second the anti-aliased fillrate. It's possible for Xenos to reach its values by virtue of its eDRAM; the RSX would likely hit a memory bandwidth bottleneck first.

Fillrate is becoming less important as more and more work is being done on each pixel.
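
For what it's worth, here's a rough sketch of where figures like that come from. The ROP counts, clocks and samples-per-clock are assumptions back-solved from the quoted numbers, not confirmed specs:

```python
# Where raw and anti-aliased fillrate figures come from:
#   raw fillrate = pixel output units (ROPs) * core clock
#   AA fillrate  = raw fillrate * MSAA samples written per ROP per clock
# The ROP counts, clocks and samples-per-clock below are back-solved
# assumptions that reproduce the quoted numbers, not confirmed specs.

def fillrate(rops, clock_ghz, samples_per_clock):
    raw = rops * clock_ghz               # gigapixels/sec
    return raw, raw * samples_per_clock  # (raw, anti-aliased gigasamples/sec)

print("Xenos:", fillrate(rops=8,  clock_ghz=0.500, samples_per_clock=4))
print("RSX:  ", fillrate(rops=16, clock_ghz=0.525, samples_per_clock=2))
# -> roughly (4.0, 16.0) for Xenos and (8.4, 16.8) for RSX
```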
 
"the RSX would likely hit a memory bandwidth bottleneck first"

When you say "first" you mean before the Xenos? What impact would hitting this bottleneck have? Does it have a real world disadvantage?

How does 'more work being done on each pixel' mitigate the need for pixel fillrate?

Thanks for the great info!

John
 
expletive said:
How does 'more work being done on each pixel' mitigate the need for pixel fillrate?

You're going to be sending fewer pixels out to memory if you spend longer on each one. It's like anything: think of a car factory. Given an equal amount of resources, if you spend more time on each car, your output is going to be smaller.
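
A back-of-the-envelope sketch of that point, with a completely made-up shader throughput figure just for illustration:

```python
# Minimal sketch of the car-factory analogy: the longer each pixel's
# shader runs, the fewer finished pixels per second can reach memory,
# so the ROPs' peak fillrate stops being the limit.
# The shader throughput figure is a made-up illustration, not a real spec.

shader_ops_per_sec = 100e9      # hypothetical total shader ops/sec
peak_fillrate      = 8.4e9      # pixels/sec the ROPs could write

for ops_per_pixel in (1, 10, 50, 100):
    shader_limited = shader_ops_per_sec / ops_per_pixel
    actual = min(peak_fillrate, shader_limited)
    print(f"{ops_per_pixel:>3} ops/pixel -> {actual/1e9:.2f} Gpixels/sec out")
```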
 
Titanio said:
You're going to be sending fewer pixels out to memory if you spend longer on each one. It's like anything: think of a car factory. Given an equal amount of resources, if you spend more time on each car, your output is going to be smaller.


Wouldn't you have figured out a way to do more on a car in the same amount of time though? Sorry, I'm not really grasping the concept here. I feel like I'm being tedious, so maybe there's an article you can point me to that explains it all?

Also, how did you guys come up with the RSX pixel fillrate?

Thanks!

John
 
Rockster said:
RSX - 8.4 Gigapixels - 16.8 Gigasamples/sec
That's really just an assumption. Fact is, we don't know for sure. Besides, that number is theoretical when doing just straight texturing/very basic pixel shading (no more than a handful of operations per pixel), and probably not indicative of how these chips will work in more mature titles released on these separate platforms...

It's possible for Xenos to reach its values by virtue of its eDRAM
...In theory. It still has to fight with the CPU for textures on the main memory bus, and as the CPU (theoretically) can consume all of main memory bandwidth... It's still just numbers on a page.

the RSX would likely hit a memory bandwidth bottleneck first.
If you're only doing more or less straight texturing, yes, but that's becoming less common even in PC games these days, at least in the more prominent titles. Heck, even MMOs do pixel shading these days. With all the extra bandwidth available from Cell and XDR memory, it's hard to really be certain which would run out of bandwidth before the other...
 
That's really just an assumption. Fact is, we don't know for sure. Besides, that number is theoretical when doing just straight texturing/very basic pixel shading (no more than a handful of operations per pixel), and probably not indicative of how these chips will work in more mature titles released on these separate platforms...
The post asked for fillrate assuming a higher clocked G70. My point about memory bandwidth was not about balance within an entire game, but solely about maximum potential fillrate. My final comment was to note that fillrate is not going to be the limiting factor in next-gen games, particularly at 720p.
 
Guden Oden said:
it's hard to really be certain which would run out of bandwidth before the other...

I haven't benchmarked them but I'd be willing to bet my own money on which side of the fence that comparison would fall.
 
ERP said:
I haven't benchmarked them but I'd be willing to bet my own money on which side of the fence that comparison would fall.
How would your bet be affected by running shader programs of 50 to perhaps 100+ instructions per pixel? It's hard to tell for sure, isn't it, when you haven't benchmarked anything. :)

Will in fact bandwidth really be any kind of limiting factor, if pixel shading is fairly, or even heavily utilized?
 
Guden Oden said:
How would your bet be affected by running shader programs of 50 to perhaps 100+ instructions per pixel? It's hard to tell for sure, isn't it, when you haven't benchmarked anything. :)

Will in fact bandwidth really be any kind of limiting factor, if pixel shading is fairly, or even heavily utilized?

Bandwidth might be a limiting factor, more so on PS3 than Xenon.

But where it will likely hurt is with low-complexity transparent polygons, mostly particles. FWIW, if there hadn't been this premature obsession with HD this time around, I'd have said it was probably a non-issue.

As for the long shader cases, it'll probably depend on the shaders. It's difficult to predict shader performance without looking at how the shaders map to the underlying architecture. Clearly at the 50-100 clocks/pixel level framebuffer bandwidth is pretty much irrelevant, but how many of the visible pixels are actually doing that much work?

The majority of visible pixels (those that can't be Z culled) in most current games at least are transparent and simple.
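
A rough sketch of that trade-off, using assumed figures (an 8.4 Gpixel/s ROP peak, a ~550 MHz part with 24 pixel shader pipes, 8 bytes written per opaque pixel) purely for illustration:

```python
# Sketch of the point above: the longer the shader, the lower the
# framebuffer bandwidth the GPU can actually demand. All figures are
# illustrative assumptions, not measured numbers.

rop_fillrate   = 8.4e9       # assumed peak pixel writes/sec
shader_rate_hz = 24 * 550e6  # assumed pixel-shader clocks available per sec
bytes_per_pix  = 8           # 32-bit colour + 32-bit Z/stencil

for clocks_per_pixel in (1, 4, 16, 50, 100):
    pixels_per_sec = min(rop_fillrate, shader_rate_hz / clocks_per_pixel)
    bw = pixels_per_sec * bytes_per_pix / 1e9
    print(f"{clocks_per_pixel:>3} clocks/pixel -> {bw:6.1f} GB/s of framebuffer traffic")

# At 50-100 clocks/pixel the demand drops to a couple of GB/s, far below
# any plausible bus limit; simple ~1 clock/pixel work is where bandwidth bites.
```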
 
Guden Oden said:
How would your bet be affected by running shader programs of 50 to perhaps 100+ instructions per pixel?
Those cases would be bottlenecked long before fillrate could become an issue, so pixel write bandwidth wouldn't really be important either.
But like ERP said, there's still plenty of stuff that gets rendered that isn't doing any significant pixel work. Particles, for instance, require large amounts of transparent fillrate (with little to no relevant shader usage) and consequently bandwidth - unless you can afford to render them in software.
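
A rough illustration of why cheap transparencies eat bandwidth: a blended pixel has to read the existing colour and Z as well as write colour, while doing almost no shader work. The layer counts below are made up, and opaque overdraw and MSAA are ignored:

```python
# Why low-complexity transparencies (particles) are bandwidth-hungry:
# each blended layer reads colour, reads Z and writes colour back,
# with next to no shader work per pixel. Resolution, frame rate and
# layer counts are illustrative assumptions only.

width, height, fps = 1280, 720, 60
opaque_bytes  = 4 + 4          # colour write + Z write
blended_bytes = 4 + 4 + 4      # colour read + Z read + colour write

for particle_overdraw in (0, 4, 8, 16):   # average blended layers per pixel
    bytes_per_frame = width * height * (opaque_bytes
                                         + particle_overdraw * blended_bytes)
    print(f"{particle_overdraw:>2} particle layers -> "
          f"{bytes_per_frame * fps / 1e9:5.2f} GB/s framebuffer traffic")
```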
 
Fafalada said:
Particles, for instance, require large amounts of transparent fillrate (with little to no relevant shader usage) and consequently bandwidth - unless you can afford to render them in software.
And are probably particularly nasty when it comes to page breaks thanks to a very likely lack of coherency.
 
Let's calculate the bandwidth needed to support these fillrates. Assuming 32-bit color and 32-bit Z + stencil gives us 8 bytes per pixel.

4.0 Gigapixels/sec * 8 = 32.0 GB/sec
8.4 Gigapixels/sec * 8 = 67.2 GB/sec

Once MSAA is turned on, compression comes into play, making bandwidth calculations tougher. Except on Xenos, where we know that 4x MSAA can be achieved without an increase in bandwidth consumption.

The upshot is that the RSX (with its 128-bit memory bus) doesn't need all 16 pixel outputs that the G70 has. Eight would handle any fillrate its memory is capable of supporting.
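
The same arithmetic in code form, plus the reverse calculation: how much fillrate an assumed ~22.4 GB/s 128-bit GDDR3 bus could actually feed:

```python
# The calculation above in code form: framebuffer bandwidth needed to
# sustain a given raw fillrate, and conversely the fillrate a bus can feed.
# The ~22.4 GB/s figure for RSX's 128-bit GDDR3 bus is an assumption.

bytes_per_pixel = 4 + 4                        # 32-bit colour + 32-bit Z/stencil

print(4.0e9 * bytes_per_pixel / 1e9, "GB/s")   # Xenos -> 32.0
print(8.4e9 * bytes_per_pixel / 1e9, "GB/s")   # RSX   -> 67.2

vram_bandwidth = 22.4e9                        # assumed 128-bit GDDR3 bus
sustainable_fill = vram_bandwidth / bytes_per_pixel
print(sustainable_fill / 1e9, "Gpixels/sec")   # ~2.8, well under what 8 ROPs can write
```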
 
Some of the most interesting footage and information came at the end of the piece, when Young talked about the latest iteration of the Medal Of Honor series for PlayStation 3, currently in development at EA Los Angeles. He revealed that, while still early in development, the PS3 version of the game was already fill-rate bound, leaving 4 SPUs of the PlayStation 3 ready to be used for code-powered effects such as physics, particles, AI, and so on.

He also advanced his theory that, while only 20% of the processing power would be used for processes other than rendering in the current generation, as much as 50% would be available for AI, physics, and other such tasks in the next generation.

What did EA mean when they said that?
 
wrongdoer said:
What did EA mean when they said that?

It means they've been able to maximise fillrate usage without maxing out the rest of the system, so fillrate has become the "bound". Nearly every game has a bound of some kind, and fillrate is a fairly common one. That means they have "spare" capacity elsewhere that they can use without affecting performance. EA's quote isn't entirely clear, but I think they're saying they became fillrate bound without using the CPU much, so they can go back and pump up CPU usage without negatively affecting performance. It's a balancing act - if they really worked the CPU, for example, it could become the bound.

Good to hear that they're using SPUs for AI :)
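
A loose sketch of the idea, with made-up millisecond budgets and the simplifying assumption that the stages overlap fully:

```python
# "Fillrate bound" in miniature: the frame takes as long as its slowest
# part, so whichever resource hits its budget first is the "bound" and
# everything else has headroom. The figures are made up for illustration.

stage_ms = {"fillrate": 16.0, "vertex work": 6.0, "CPU/SPU work": 5.0}

bound = max(stage_ms, key=stage_ms.get)
frame_ms = stage_ms[bound]          # assumes the stages overlap fully

print("frame time:", frame_ms, "ms, bound by", bound)
for stage, ms in stage_ms.items():
    if stage != bound:
        print(f"  {stage}: {frame_ms - ms:.1f} ms of headroom left")
```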
 