Qroach said:Nintendo never gave out theoretical numbers. That's the point I'm making.
Quincy,
Nintendo did release the 40 million poly/sec unlit number. I don't know to what extent this differs from a theoretical output.
Qroach said:Nintendo never gave out theoretical numbers. That's the point I'm making.
GwymWeepa said:I've seen a large swath of games from each platform, and though some PS2 games can compete with anything on the other two machines, most fall short in my experience.
GwymWeepa said:The PS2 had 16 pixel pipelines, only now is that being matched by video cards, and its video memory bandwidth was over 40GB/s, which has yet to be matched... but what does that give you? Sub-GameCube looking graphics... I don't know wtf they did with the thing internally; on paper the PS2 was going to be a monster lol.
This must be the EA Canada article, but apparently I was way off with the numbers... I could have sworn I heard those numbers somewhere, though Xbox and PS2 aren't even mentioned in this.
"Gamecube development hardware running with eight texture effect layers + all other effects on: Approximately five million polygons per second
Gamecube development hardware running with four texture effect layers + all other effects on: Approximately 14 million polygons per second."
Wording regarding the remaining benchmark tests was vague, but evidently the company also did experiments with the Gamecube development hardware running at least four hardware lights and other effects with impressive results of approximately 17 million polygons per second. Sources we spoke with said this is not only entirely possible, but highly conservative.
So all effects + 8 texture layers = 5 million polygons per second.
All effects with only 4 texture layers = 14 million (wow, it takes a bit of a dive at 8... memory limited maybe?).
And 17 million under what could probably pass for in-game conditions; quick per-frame math below.
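Just to put those figures in per-frame terms (my own quick math, not from the article):

```python
# Per-frame polygon budgets implied by the EA Canada numbers (my arithmetic).
for mpolys in (5, 14, 17):
    for fps in (30, 60):
        per_frame = mpolys * 1_000_000 / fps
        print(f"{mpolys} Mpoly/s at {fps} fps -> ~{per_frame:,.0f} polys per frame")
```

So even the "highly conservative" 17 million figure works out to roughly 280k polygons per frame at 60fps.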
BTW, early EA games on GameCube were sometimes better than the PS2 version, sometimes not. Agent Under Fire was much better, and NightFire was better too.
Still pretty sure I remember an artificial graphics demo out of EA that did 25 million pps though...
http://gameztech.8m.com/consolewar1.htm
From here....
Quote:
Also, developers have recently stated that the Gamecube can push more than 20 million polygons per sec.
http://cube.ign.com/articles/094/094556p1.html
Here's a Factor 5 article; I just thought it was interesting that the only thing he noted the GameCube had over the Xbox was memory bandwidth (which means better textures). Memory access times too.
http://www.segatech.com/gamecube/overview/
Scroll down a bit and there's a blurb about Factor 5 stating they could do 20 million polygons per second with all effects, with "effects" meaning texture layers rather than actual effects. (Umm... does the GameCube have a physical max for texture layers?) Maybe at its old clock speed it could do 25? (But then it'd have a bottleneck in the CPU...)
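For what it's worth, if I remember right the originally announced Flipper clock was 202.5MHz versus the final 162MHz (that's my recollection, not from the article), and scaling Factor 5's figure linearly with clock lands right on 25 million:

```python
# Hypothetical: scale Factor 5's 20 Mpoly/s claim to the originally announced clock.
# The 202.5MHz figure is my recollection of the pre-launch spec, not from the article.
final_clock_mhz = 162.0
announced_clock_mhz = 202.5
scaled = 20e6 * announced_clock_mhz / final_clock_mhz
print(f"~{scaled/1e6:.0f} Mpoly/s")  # ~25 Mpoly/s
```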
The basics of this PPC 750CXe derivative (codenamed Gekko) are fairly simple; the PowerPC core features a 4-stage basic integer pipeline which is mostly responsible for the very low clock speeds the core is able to achieve. Most important for gaming performance however are more precise floating point calculations and the Gekko's floating point pipeline is 7 stages long. Since the Gekko is a native RISC processor it does not suffer the same fate as its Xbox counterpart in that it doesn't have to spend much time in the fetch/decoding stages of the pipeline. Immediately upon fetching the RISC instructions to be executed, they are dispatched and one clock cycle later, they are ready to be sent to the execution units.
The PowerPC architecture is a 64-bit architecture with a 32-bit subset which in the case of the Gekko processor, is what is used. The CPU supports 32-bit addresses and features two 32-bit Integer ALUs; separate to that is a 64-bit FPU that is capable of working on either 64-bit floats or two 32-bit floats using its thirty two 64-bit FP registers. This abundance of operating registers is mirrored in the 32 General Purpose Registers (GPRs) that the processor has, dwarfing the Xbox's x86-limited offering (8 GPRs).
In the case of the GameCube, the CPU is clocked at 485MHz, or 3 times its 162MHz FSB frequency. The benefit of a shorter pipeline is of course, an increased number of instructions that can be processed in those limited number of clocks.
The role of North Bridge is played by Flipper in that it features a 64-bit interface to the Gekko CPU running at 162MHz. The entire Flipper chip runs at 162MHz which lends itself to much lower latency operation since all bus clocks operate in synch with one another.
Based on the operating frequency of the core (162MHz) you can tell that the Flipper graphics core isn't a fill-rate monster, but what it is able to do is portray itself as a very efficient GPU. The efficiency comes from the use of embedded DRAM.
The 2MB Z-buffer/frame buffer is extremely helpful since we already know from our experimentation with HyperZ and deferred rendering architectures that Z-buffer accesses are very memory bandwidth intensive. This on-die Z-buffer completely removes all of those accesses from hogging the limited amount of main memory bandwidth the Flipper GPU is granted. In terms of specifics, there are 4 1T-SRAM devices that make up this 2MB. There is a 96-bit wide interface to each one of these devices offering a total of 7.8GB/s of bandwidth which rivals the highest end Radeon 8500 and GeForce3 Ti 500 in terms of how much bandwidth is available to the Z-buffer. Z-buffer checks should occur very quickly on the Flipper GPU as a result of this very fast 1T-SRAM. Also, the current surface being drawn is stored in this 2MB buffer and then later sent off to external memory for display. Because of this, dependency on bandwidth to main memory is reduced.
The 1MB texture cache helps texture load performance but the impact isn't nearly as big as the 2MB Z-buffer. There are 32 1T-SRAM devices (256Kbit each) that each has their own 16-bit bus offering 10.4GB/s of bandwidth to this cache.
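Quick sanity check on the bandwidth numbers in that excerpt, just multiplying bus widths by the 162MHz clock it quotes (my own arithmetic, not AnandTech's):

```python
# Back-of-the-envelope checks on the AnandTech figures (my math, not theirs).
CLOCK_HZ = 162e6  # Flipper / FSB clock from the excerpt

def bandwidth_gbs(devices, bus_width_bits, clock_hz=CLOCK_HZ):
    """Aggregate bandwidth in GB/s = devices * bus width in bytes * clock."""
    return devices * (bus_width_bits / 8) * clock_hz / 1e9

# Gekko <-> Flipper front-side bus: one 64-bit interface @ 162MHz
print(f"CPU bus:        ~{bandwidth_gbs(1, 64):.2f} GB/s")   # ~1.30 GB/s
# Embedded Z-/frame buffer: 4 x 1T-SRAM devices, 96-bit interface each
print(f"Z/frame buffer: ~{bandwidth_gbs(4, 96):.2f} GB/s")   # ~7.78 GB/s (quoted as 7.8)
# Texture cache: 32 x 1T-SRAM devices, 16-bit bus each
print(f"Texture cache:  ~{bandwidth_gbs(32, 16):.2f} GB/s")  # ~10.37 GB/s (quoted as 10.4)
```

Both embedded pools land right on the quoted figures, so those numbers are just bus width times clock, nothing exotic.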
GwymWeepa said:I haven't seen ZOE 2, I've been wanting to though. But anyhoo, PS2 from what I've seen wouldn't be able to handle Ninja Gaiden, but some games really take art design and fudge nearly as impressive graphics, like the Jak series.
PC-Engine said:Rygar on PS2 is the closest to NG you'll ever get. Both are by Tecmo.
PD IS a benchmark. Pretty much like 3Dmark, only you get to shoot things every now and then.
*DUCKS*
PC-Engine said:Rygar on PS2 is the closest to NG you'll ever get. Both are by Tecmo.
IMO Onimusha 3 looks like a better "PS2 NG" candidate.
Deepak said:PC-Engine said:Rygar on PS2 is the closest to NG you'll ever get. Both are by Tecmo.
You mean to say that Rygar is the best looking PS2 game?
london-boy said:No way! Just meant it's the closest kind of game you can find on PS2. It's really an average game on its own...
And although I find this article by AnandTech to be ill-informed regarding the GC's specifics (like the TEV, or operations such as hardware lights being done in parallel with other functions, etc.) and skewed in the Xbox's favor, it still makes some interesting points regarding efficiency:
This is efficiency engineered into the design, not just cost efficiency.
The Xbox's major advantages over the GC are its programmable vertex shaders and larger RAM pool. Raw poly output means nothing until effects are applied to those polys (shaded, textured, lit, shadowed, self-shadowed, bump-mapped, etc.).
The point is, given more R&D time, access to basically the same partners, and possibly slightly newer technology, why wouldn't or couldn't the Revolution be technically superior to Xenon? Trying to justify this by pointing to Nintendo's past console efforts is moot. Iwata is at the helm now, not Yamauchi.
rabidrabbit said:GCN 'AAA' games look good, polished and they run fast(er than xbox 'AAA' games)