"2x the power of the GC," can someone clarify what this means? (ERP)

Vysez said:
I think the only thing that is clear is that most people can't tell the number of polygons pushed per frame.

RE4 didn't push more than 12 million pps. Not even close.
I mean, that would be saying that the game had more than half a million polygons per frame... That's more than a lot of next-gen games.
I got that RE4 pushed >12 Mpps from here. I was just looking for an example of a Gamecube game that did more than the stated limit. The other example given on the page is Star Fox Adventures, weighing in at 15 Mpps.

Form factor should determine the window of heat dissipation, with better designs being higher and worse being lower. You can't equate Watts consumed to Pixels/Vertices processed, though.
 
OtakingGX said:
I got that RE4 pushed >12 Mpps from here. I was just looking for an example of a Gamecube game that did more than the stated limit. The other example given on the page is Star Fox Adventures, weighing in at 15 Mpps.

I wouldn't put any faith in the technical information in that article. A 3.58 MHz SNES vs an "11 mHz" Megadrive (plus conveniently forgetting the graphics side of things), and the "62.5 mhz" N64?

Might as well just make up some polygon figures that you like the sound of and stick with those!
 
Short of a hardware analyser, I would be extremely suspicious of any polygons/second numbers. I couldn't guess accurately, and I doubt anyone else's ability to do so.

I would bet that neither RE4 nor Star Fox is anywhere near 12 million polygons/second.

Most in-game polygon counts are a LOT lower than most of the guesses I see around, and you'd be surprised just how much you have to increase polygon counts to get a visible difference.
 
Thanks for the info. If you don't mind:

ERP said:
Short of a hardware analyser, I would be extremely suspicious of any polygons/second numbers. I couldn't guess accurately, and I doubt anyone else's ability to do so.

Do you know if there is any game on the GC that reaches 12 million polygons/second?

Most in-game polygon counts are a LOT lower than most of the guesses I see around, and you'd be surprised just how much you have to increase polygon counts to get a visible difference.

Isn't that dependent on how you use the polygons? I.e., with 3x the polys per object I doubt anyone can see a difference, but with 3x the objects (e.g. enemies, trees, barrels, 3x more complex structures, etc.; this would be 3x the polys, right?) I guess people will start to see a difference?

BTW, can you say how much is a LOT? (E.g. I would guess up to D3 level, which IIRC is about 100,000 polys as source + shadows/light.)

Just to finish, what would you think is the best way to improve gfx?
 
pc999 said:
Do you know if there is any game on the GC that reaches 12 million polygons/second?

Well, Factor 5 (Julian Eggebrecht) claimed 15 million polygons/sec with Rogue Leader in an interview, at least if my memory serves me right.
 
He claimed in an interview on IGN Insider that he got 18m out of Rebel Strike. You've gotta keep in mind that 30fps will divide your triangle count per second by 2. For me, it's hard to judge. Apparently, nAo says that GT4 does 3m polygons/sec using the Performance Analyzer, and I remember the original Ratchet & Clank did 7.5m (it was the maximum when Sony introduced the PA).
 
fearsomepirate said:
You've gotta keep in mind that 30fps will divide your triangle count per second by 2.

Not necessarily; a game running at 30 fps and 15 million polygons/sec would be drawing an equal number of polygons to a game running at 60 fps but with half the polygons per frame. Perhaps you meant it would divide your triangle count per frame by 2?
 
OtakingGX said:
I got that RE4 pushed >12 Mpps from here. I was just looking for an example of a Gamecube game that did more than the stated limit. The other example given on the page is Star Fox Adventures, weighing in at 15 Mpps.

Form factor should determine the window of heat dissipation, with better designs being higher and worse being lower. You can't equate Watts consumed to Pixels/Vertices processed, though.
Something about the RE4 pic comparisons in that link. The pics from the PS2 aren't even running in real time. They made video files using the game engine and put them on the DVD, due to performance limitations that would have downgraded the cut scenes. On the GC version they are real time, though. The character models, textures, lighting effects etc. during cut scenes on the GC remain exactly the same in gameplay, whereas on the PS2 there is an obvious difference.

Examples on PS2:
Water in the river (the part where you find that monstrous fish creature) looks flat during gameplay. In cut scenes it doesn't.

Lighting effects are better during cut scenes, and limited during gameplay

In the part where you see Krauser for the first time, there are shadows cast on the characters, similar to self-shadowing.

At the beginning of the game, go and talk to the guards in the car that brought you to the area. The detail, especially in their faces, is greatly improved compared to the dull-looking models during gameplay.

When Leon enters the village and zooms with his binoculars, notice the texture detail during the cut scene versus how it looks during gameplay.

Check the detail during the cut scene when the Napoleon-looking short guy is swallowed by that plant-like monster, then zoom in on him with the sniper rifle during gameplay. You'll see a great difference.

Final boss cut scenes-gameplay

Death of Luis (was that his name?). Clearly the PS2 is not capable of doing these graphics in real time.

[Image: bh414%20GCN.jpg]

GC real-time graphics. Actual GC graphics.

[Image: resident-evil-4%20PS2.jpg]

PS2 video. Not running in real time on PS2; just a video being played off the disc.
 
We might as well be playing guess the total production cost rather than 'What does 3x mean?'

My guess is $199 USD retail.

If you look at, say, $15 for the DVD-ROM and $10 for assembly and packaging (just numbers I'm making up), and $0 per-unit profit (but not a loss), then that's roughly how much tech in $$$ I'm expecting.

For graphics, though, since that's what we're discussing: I'm expecting roughly 150M transistors, and a lot of specialized fixed-function units that make certain effects cost no performance, since with a much lower fillrate to attain they can really drill down on each pixel.
 
Phil said:
Not necessarily; a game running at 30 fps and 15 million polygons/sec would be drawing an equal number of polygons to a game running at 60 fps but with half the polygons per frame. Perhaps you meant it would divide your triangle count per frame by 2?

I meant that a scene with a certain amount of geometric detail will produce half the vertex rate running at 30fps that it will running at 60fps, or that two scenes with similar levels of geometry will have different polycounts per second by about 2x if one is 30fps and the other 60fps.
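A quick sketch of the arithmetic behind that point (the numbers are illustrative, not measured from any game):

```python
# Illustrative only: the same per-frame scene detail produces half the
# polygons/second at 30fps that it does at 60fps.
scene_polys_per_frame = 250_000  # hypothetical scene complexity

print(scene_polys_per_frame * 60)  # 15000000 polys/sec at 60fps
print(scene_polys_per_frame * 30)  # 7500000 polys/sec at 30fps
```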

That article on gcadvanced has no credibility at all. They probably made up their polygon counts on the spot. I'm supposed to believe Halo 2 is pushing twice as much geometry as Ratchet & Clank while running at half the framerate? Sure, guys. It wasn't even a very well-reasoned editorial, either.
 
18 million vertices a second is 18 million vertices a second, whether that's 60 fps or 1 fps. Frame rate allows more or fewer triangles per frame, but isn't going to affect polys-per-second figures. Unless they came up with those figures by taking the number of triangles per frame and multiplying it by 60, despite running at a lower-than-60fps framerate!
 
pc999 said:
Thanks for the info. If you don't mind:
Do you know if there is any game on the GC that reaches 12 million polygons/second?

There is absolutely no way to tell, but from my experience I very much doubt it.
GameCube is extremely tolerant of coding style; the difference between doing something the stupid way and spending hours poring over the best way is pretty small.

BTW, can you say how much is a LOT? (E.g. I would guess up to D3 level, which IIRC is about 100,000 polys as source + shadows/light.)

Just to finish, what would you think is the best way to improve gfx?

Most guesses I see are well over 2x off, some 5x plus.
Never played D3 so I can't comment.

As for improving graphics, it largely depends on the application. You really need close to a 10x improvement in polygon counts to add noticeable detail to a lot of what's in current-gen titles.

The example I always give is a window on a building face. Commonly it's just a texture on 2 polygons; to model it requires closer to 100 polygons, and for the most part you have to be looking for it to notice the difference.

Similarly, draw two lines in Photoshop that approximate what you see in silhouettes, then subdivide the lines until the curve is approximated smoothly enough for you. Count the final number of lines, divide by 2, and SQUARE the number; that's the increase in polygon count it would take to reach that degree of smoothness. For example, if you think 4 segments is enough, the increase in polygon count to go to 4 segments instead of 2 is roughly (4/2)*(4/2), a 4x increase; if it takes, say, 8 segments to make it smooth, then that's a 16x increase in polygons.
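That rule of thumb can be sketched in a few lines; the function name and the 2-segment baseline here are my own framing of the post, not anything official:

```python
# A minimal sketch of the silhouette rule of thumb above: the polygon-count
# multiplier needed to match a silhouette drawn with `segments` line
# segments, relative to a 2-segment baseline. Squaring reflects that
# detail has to scale in two dimensions, not one.

def silhouette_poly_multiplier(segments: int, baseline: int = 2) -> float:
    """Square the ratio of segment counts to estimate the polygon increase."""
    return (segments / baseline) ** 2

print(silhouette_poly_multiplier(4))  # 4.0, i.e. a 4x polygon increase
print(silhouette_poly_multiplier(8))  # 16.0, i.e. a 16x polygon increase
```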

Detail just costs polygons, and none of this takes into account that as polygons get smaller, GPUs get less and less efficient.

Normal and parallax maps help a lot with detail away from the edges, and when things are in motion, edges are often much less of an issue.

There is still a lot of mileage in getting lighting right. I've always been a big fan of Namco's work in that area, and of Polyphony's GT series; the use of color and light is extremely good.
Great dynamic lighting is still a work in progress, but I'm sure we'll see strides in that direction this gen. It's really the first time we've been able to do significant math on a per pixel basis, so it should be interesting.
 
Look, what I was getting at, and probably should have said explicitly, is that people try to guesstimate polygon counts, usually by looking at screenshots, but tend to forget about framerates when guessing. If they're taking as points of reference games with known polygon counts running at 60fps (such as Ratchet & Clank and other games measured since the Performance Analyzer was released, plus Eggebrecht's claim of 15m in Rogue Leader), their guesses will be off by at least a factor of 2. I dunno, I was trying to give some rationale for why people think games this gen are doing way, way more polygons than most of them are.

Geeze, forget I said anything.
 
ERP said:
There is absolutely no way to tell, but from my experience I very much doubt it.

12-15 million polygons displayed is spread out over the 60 frames per second refresh rate, correct? So at that framerate you have individual scenes of varying geometric complexity, with 200,000-250,000 polygons visible at any given time. Some recent acquaintances of mine have interviewed both Tosti & Eggebrecht, and they feel that F5 had no reason to inflate the stated numbers. What they are doing with the PS3 will be equally as technically impressive for the next generation (though I doubt their numbers will be met with the same skepticism). They are indeed tech gods when it comes to exploiting & pushing a platform's abilities.
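The per-frame arithmetic in that paragraph is easy to verify; a minimal sketch:

```python
# Sanity-checking the per-frame figures above: polygons/second divided
# by frames/second gives the polygon budget per frame.

def polys_per_frame(polys_per_second: int, fps: int) -> int:
    return polys_per_second // fps

print(polys_per_frame(12_000_000, 60))  # 200000 polys per frame
print(polys_per_frame(15_000_000, 60))  # 250000 polys per frame
```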

ERP said:
GameCube is extremely tolerant of coding style; the difference between doing something the stupid way and spending hours poring over the best way is pretty small.

Clarify this a bit further for me ERP, as self-shadowing & color tinting were features thrown in for free basically, & yet I can count on my hand the number of games that actually use ss. EMBM is hardwired into the T&L correct? RL, RS3, & FF:CC are the only games where I clearly saw it utilized. Per-pixel lighting? I'm aware of the associated costs these features extract, but the cost is not a true barrier to real-time software implementation.
 
Clarify this a bit further for me ERP, as self-shadowing & color tinting were features thrown in for free basically, & yet I can count on my hand the number of games that actually use ss. EMBM is hardwired into the T&L correct? RL, RS3, & FF:CC are the only games where I clearly saw it utilized. Per-pixel lighting? I'm aware of the associated costs these features extract, but the cost is not a true barrier to real-time software implementation.

SS is in no way free on GC.
EMBM costs vertex rate just like everything else, although it's probably more an authoring issue than anything else; it breaks down in a lot of cases.

Developers measure polygon counts in interesting ways sometimes. For example I could imagine that during some parts of a frame where very simple small polygons are in use you might see 15 million PPS, but as a whole I'd be pretty surprised. They could also be doing a crappy job of frustum culling and counting submitted polygons rather than actually rendered ones.

I've run enough PS2 games through the PA to know that dev-claimed polygon counts are rarely close to realised counts.

I'm not calling any of them liars; I'm saying you have to know what they were counting. In a lot of cases it's some static test of their rendering engine outside the game.
 
ERP said:
SS is in no way free on GC.
EMBM costs vertex rate just like everything else, although it's probably more an authoring issue than anything else; it breaks down in a lot of cases.

Developers measure polygon counts in interesting ways sometimes. For example I could imagine that during some parts of a frame where very simple small polygons are in use you might see 15 million PPS, but as a whole I'd be pretty surprised. They could also be doing a crappy job of frustum culling and counting submitted polygons rather than actually rendered ones.

I've run enough PS2 games through the PA to know that dev-claimed polygon counts are rarely close to realised counts.

I'm not calling any of them liars; I'm saying you have to know what they were counting. In a lot of cases it's some static test of their rendering engine outside the game.

Understood, & thanks ERP. I should've been clearer regarding SS: not free, but easily realized technically, as nothing is ever completely, truly free. From Gamasutra:

"However, as mentioned above, a couple of features where added in automagically already, like self-shadowing and tinting for example."

"Per-object self-shadowing can be realized quite nicely on the Nintendo Gamecube. The benefit of doing self-shadowing on a per object basis is that one does not need to be concerned so much with precision."

"One should note that during the shader build many features are activated dynamically. For instance, if an object should get tinted a color multiplication is added to the final output color whatever shader was setup before."

"The results of global lighting can be computed in three different ways: per vertex, per pixel using emboss mapping, and per pixel using bump mapping. All three of these methods come in two variants one with self-shadowing and one without."--Florian Sauer & Sigmund Vik

http://www.gamasutra.com/features/20.../sauer_pfv.htm

My apologies for restating old tech news, but given that the associated cost/hit of self-shadowing seems minimal, shouldn't it have seen more widespread software implementation?
 
SS worked great for the vehicles and stuff, but it was really glitchy and weird on the human characters, at least on the Escape from Bespin level in Rebel Strike.

Part of the problem I think is when making cross-platform games, devs don't spend a lot of time coding in unique graphical features that will only work on the Xbox and/or Gamecube, since their combined userbase is still less than half of PS2's. Almost every 3rd-party game on the Cube was cross-platform. Why bother coding features that are so unique in implementation that they can't be transferred cross-platform? It's not like any of these games had bump-mapping or self-shadowing on the Xbox, either. I do think this was kind of a mistake...if they'd put more effort into the technology of the Cube/Xbox versions, I think they would have sold better, much like NFL 2K5 did great on Xbox. Despite the success of Rogue Leader, developers just didn't seem that interested in really taking the hardware for a spin. If you go back and read the literature of the era, they were just really, really enchanted with the Xbox. So if you want to have fun with your graphics engine, design for Xbox. If you want lotsa sales, go to the PS2. That leaves the Gamecube as the middle child that no one really was interested in.

Geist, Monkeyball 2, Luigi's Mansion, Mario Sunshine, and Starfox Adventures all used some kind of bump-mapping that I saw; whether or not it's the emboss technique isn't something I'm sharp enough to recognize.
 
fearsomepirate said:
Part of the problem I think is when making cross-platform games, devs don't spend a lot of time coding in unique graphical features that will only work on the Xbox and/or Gamecube, since their combined userbase is still less than half of PS2's. Almost every 3rd-party game on the Cube was cross-platform. Why bother coding features that are so unique in implementation that they can't be transferred cross-platform?

Then it behooves Nintendo to provide development support to get that extra polish for GCN/Revolution versions. Cross-platform development is here to stay, so if Nintendo wants games on their platform to look as good as possible, they need to step up to the plate and help out wherever possible. It may not be feasible for every title, but some is better than none.
 