Will next-gen consoles hit the 10G pixel barrier?

It looks like the new R420 might touch 8 Gpixels if it indeed has 16 pipelines and a 500 MHz core clock.

As far as we know, ATI is still developing two separate console graphics processors. One of those consoles is likely to hit the market late next year; the other, in 2006. Both should, in theory, be more powerful than the new R420, which had to have been developed mostly in 2003, maybe starting in late 2002. The two console chips were probably started in 2002. The console coming sooner probably won't have its chip finished until early next year; the one coming in 2006, probably late next year or early 2006. So again, both of those consoles' VPUs are newer than the R420.

If the R420 does hit 8 Gpixels, or close to it, it wouldn't take much more to hit the 10G pixel barrier. They don't even need to double the pipelines again, from 16 to 32. You could get away with 20~24 pipelines, or just boost the core clock to 650 MHz or so while keeping the console VPUs at 16 pipes.
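The arithmetic is simple enough to sketch (Python). This is only a back-of-envelope model: it assumes one pixel per pipe per clock and ignores bandwidth limits and other real-world stalls.

```python
# Rough peak-fillrate arithmetic: fillrate = pipelines x core clock,
# assuming one pixel per pipe per clock (idealized peak, not sustained).

def fillrate_gpixels(pipes, clock_mhz):
    """Peak fillrate in Gpixels/s."""
    return pipes * clock_mhz / 1000.0

print(fillrate_gpixels(16, 500))   # R420-class: 8.0 Gpixels/s
print(fillrate_gpixels(20, 500))   # more pipes at the same clock: 10.0
print(fillrate_gpixels(16, 650))   # or keep 16 pipes, raise the clock: 10.4
```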

Now, while console graphics processors have different requirements than PC graphics processors, the basic number of pipes is often similar. E.g. NV2A and Flipper each had 4 pipes in 2001, and so did the PC graphics processors of the time (NV20, R200).

If PC graphics processors have 16 pipes this year, it's pretty much certain that the new consoles coming in 2005 and 2006 will have 16 pipes or more in their graphics processors. Agree?

So again, my question is: do you not think that the next-gen consoles will reach the 10G pixel barrier? ;)
 
OK to turn it around, would you rather have 10 Gpixels of fillrate or half of that with 3x the pixel shading power?

I'm not sure what the tradeoff would actually be, but you get the point.... Fillrate isn't everything.
 
350 nm Q4 97 90 MHz (Voodoo 2) 2 texel units (350 nm max MHz?)
250 nm Q1 99 183 MHz (Voodoo 3 3500) 2 pixel units (250 nm max MHz)
180 nm Q1 00 250 MHz (Nvidia GeForce2 Ultra) 4 pixel units (180 nm max MHz)
150 nm Q3 01 275 MHz (ATI Radeon 8500) 4x2 pixel units
150 nm Q3 03 412+ (o.c.) MHz (ATI Radeon 9800 XT) 8 pixel shaders (150 nm max MHz)
130 nm Q1 03 500 MHz (Nvidia GFFX 5800 Ultra) 4 pixel shaders
Probably we will see the 130 nm maximum in Q3-Q4 04 (max < 600 MHz), since in Q1-Q2 05 we will have 110 nm (Q2 04 for ATI mainstream parts) or 90 nm on high-end parts ($300-$500). So:
130 nm Q2 04 500 MHz (Nvidia GF 6800 Ultra or ATI X800 XT) 16 pixel shaders
130 nm Q2 04 600 MHz (ATI X900? XT) 16 pixel shaders
If these numbers confuse you, let me clarify: roughly every 18 months the number of pixel shaders doubles, and in the worst case it stays the same for no more than 30 months.
So in Q3 05 - Q4 06 it is all but certain that Xbox 2 (DX10 or DX10.1) and the Gamecube successor (OpenGL 2) will have 32 pixel shaders.
Also, if you check the MHz scaling against process technology, it is quite possible to have 650-667 MHz (minimum) at 90 nm, or 1 GHz+ (minimum) at 75-65 nm (although 65 nm won't be mature enough in 2006):
180 nm 166 MHz (min): Gamecube (162.5 MHz) + Radeon (166 MHz)
150 nm 250 MHz (min): Xbox (233 MHz) + Radeon 8500 (250 MHz)
130 nm 333 MHz (min): Radeon 9600 non-Pro (325 MHz)
110 nm 500 MHz (min)
90 nm 667 MHz (min)
75 nm 1000 MHz (min)
65 nm 1333 MHz (min)
So it is quite possible to have 667 x 32 = 21G+ or 1333 x 32 = 42G+ pixels.
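The closing multiplication can be checked in a couple of lines (Python). Note the per-node minimum clocks here are the speculative figures from the list above, not confirmed specs, and the 32-pipe count is likewise an assumption.

```python
# Projected fillrate = assumed minimum clock x an assumed 32 pixel pipes.
# All input figures are speculation carried over from the post above.
pipes = 32
for clock_mhz in (667, 1000, 1333):   # speculative per-node minimum clocks
    print(f"{clock_mhz} MHz x {pipes} pipes = {pipes * clock_mhz / 1000:.1f} Gpixels/s")
```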
 
OK to turn it around, would you rather have 10 Gpixels of fillrate or half of that with 3x the pixel shading power?

I'm not sure what the tradeoff would actually be, but you get the point.... Fillrate isn't everything.

I am not sure either, about trading half the fillrate for 3x the pixel shading power, but I WOULD take 3/4ths to 4/5ths of 10 Gpixels and 3x the pixel shading power :)
 
Don't think you need that kind of fill-rate, but the more pipelines, the more pixel shaders, and so more work gets done. The quality of the pixels goes up.
 
Edge said:
Don't think you need that kind of fill-rate, but the more pipelines, the more pixel shaders, and so more work gets done. The quality of the pixels goes up.

Not necessarily, they could just have 8 pipelines with 8 shader units per pipe. That would give them all the pixel fillrate they need and plenty of shading power.
 
Not necessarily, they could just have 8 pipelines with 8 shader units per pipe. That would give them all the pixel fillrate they need and plenty of shading power.

I didn't think of that. I guess anything is possible.


More likely, though, rather than taking a step back (going down to 8 pipes), both console graphics processing units will have 16 pipes and 4-8 shader units per pipe.
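For what it's worth, the layouts being discussed can be compared with some back-of-envelope math (Python). The 500 MHz clock and the "one shader op per unit per clock" model are assumptions, so treat the numbers as relative, not absolute.

```python
# Fillrate scales with pipe count; shader throughput scales with
# pipes x shader units per pipe (one op per unit per clock, idealized).
def throughput(pipes, units_per_pipe, clock_mhz=500):
    fill_gpix = pipes * clock_mhz / 1000
    shade_gops = pipes * units_per_pipe * clock_mhz / 1000
    return fill_gpix, shade_gops

print(throughput(8, 8))    # (4.0, 32.0): half the fillrate, same shading
print(throughput(16, 4))   # (8.0, 32.0)
print(throughput(16, 8))   # (8.0, 64.0)
```

The point the comparison makes: 8x8 and 16x4 give identical shader throughput in this model, and the extra pipes only buy raw fillrate.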
 
Well, if the rumors are true, the R500 may break it by the end of this year or early next year, as we should now expect it much sooner than originally thought.
 
I really doubt the R500 will make it out this year. Until recently, the R500 was expected around fall 2005. I could see it being moved up to spring 2005, but probably not this year. Then again, almost anything is possible.


If the R420 were getting beaten severely by the NV40 (highly unlikely), then I could see ATI rushing the R500 into action ASAP.
 
Edge said:
Don't think you need that kind of fill-rate, but the more pipelines, the more pixel shaders, and so more work gets done. The quality of the pixels goes up.

Well, as the number of pixels to be drawn in console games is fairly limited compared to PC games, even at HDTV resolutions, having lots of pixel pipes just to gain a large number of pixel shader units is not as efficient as having fewer pipes, each equipped with more pixel shaders.

Each pixel pipe is essentially overhead when what you want is good-quality pixels; it's the packaging around the candy bar. If you deliver each Snickers bar packaged in a half-ton safe, you're not delivering as many candy bars per truckload as if each were wrapped in a thin sheet of plastic. It's the same here: the transistors spent on the pixel pipe itself could find better use as more shader units.

Also, remember that lots of pipes will stand idle at the edges of polygons; more so as polygons get smaller, since there will be more and more edges, reducing efficiency further for a setup with many pipes...
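The edge-idle effect can be illustrated with a toy rasterizer model (Python). The 2x2-pixel block granularity is an assumption about how pipes are grouped; real chips vary, so this is only a sketch of the trend.

```python
# Toy model: shade a right triangle in 2x2 pixel blocks. Any block that
# straddles a triangle edge has idle slots, so efficiency = covered
# pixels / (4 x blocks touched). Smaller triangles waste more.

def quad_efficiency(size):
    # Pixel (x, y) counts as covered if its center lies inside a right
    # triangle with legs of `size` pixels.
    covered = {(x, y)
               for y in range(size) for x in range(size)
               if (x + 0.5) + (y + 0.5) < size}
    blocks = {(x // 2, y // 2) for (x, y) in covered}
    return len(covered) / (4 * len(blocks))

for s in (64, 16, 4):
    print(f"{s}-pixel triangle: {quad_efficiency(s):.0%} of shading slots useful")
```

As the triangle shrinks toward a few pixels, a growing fraction of each block falls outside the edge, which is exactly the efficiency loss described above.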
 
Each pixel pipe is essentially overhead when what you want is good-quality pixels; it's the packaging around the candy bar. If you deliver each Snickers bar packaged in a half-ton safe, you're not delivering as many candy bars per truckload as if each were wrapped in a thin sheet of plastic. It's the same here: the transistors spent on the pixel pipe itself could find better use as more shader units.

Also, remember that lots of pipes will stand idle at the edges of polygons; more so as polygons get smaller, since there will be more and more edges, reducing efficiency further for a setup with many pipes...

Nice explanation. Now I sort of understand why we don't need dozens of pipes; more shader units / FP units per pipe is better, then...
 
Guden Oden said:
Edge said:
Don't think you need that kind of fill-rate, but the more pipelines, the more pixel shaders, and so more work gets done. The quality of the pixels goes up.

Well, as the number of pixels to be drawn in console games are fairly limited compared to PC games, even at HDTV res, having lots of pixel pipes to gain a large amount of pixel shader units is not as efficient as having fewer pipes each equipped with more pixel shaders.

Each pixel pipe is essentially overhead when what you want is good-quality pixels; it's the packaging around the candy bar. If you deliver each Snickers bar packaged in a half-ton safe, you're not delivering as many candy bars per truckload as if each were wrapped in a thin sheet of plastic. It's the same here: the transistors spent on the pixel pipe itself could find better use as more shader units.

Also, remember that lots of pipes will stand idle at the edges of polygons; more so as polygons get smaller, since there will be more and more edges, reducing efficiency further for a setup with many pipes...

Not unless you are grouping the pipes to work on different polygons: why do you think the current next-generation ATI and nVIDIA cards seem to go with 16 pixel pipelines?

You can shade pixels in parallel from different triangles: not all 16 pixel pipelines are going to be working on the same triangle ;).
 
Pana,

Even if the pipes are split up into sub-groups, you'll still have pipes doing nada at the edges of polys. If poly sizes start to approach one pixel, that means efficiency will plummet through the floor.

So far, no chip is able to work on two or more different polys in the same sub-group, and there's no reason to believe this will change anytime soon...

Also, like I said, PC chips generally render at higher resolutions than consoles will, so more pipes is the way to go there (polys get larger at higher res too, so there's not as much edge waste). After all, there are only so many pixels that can realistically be filled on a TV screen.
 