"Taming The Dragon: Next-Generation Asset Creation for PS3" (Some Lair info)

You are right.

_phil_ said:
wow, again a derailing thread, shame some people can't help their second nature taking over.
It's IGDA, not some kiddy event or public E3 babble. It's a professional thing. What's the problem with 100-170k dragons? If you plan to film them from near, you need that data. Then you LOD them to 15,000 polys to display ten of them, then 1,500 polys if you want 100.
Big deal...


I apologize, I too played a role in going off topic. Back to F5, I am sure they will develop a very efficient engine to display the large number of dragons. I haven't seen many environment shots, so I am curious how that develops, as well as how closely the in-game model will resemble the cut-scene model.
 
I'm sorry. Bump-mapping was used a lot in those games, but because bump-mapping needs a certain amount of instructions, that's all you can have for effects. The GameCube has the lowest fillrate this gen, so if textures exceed the instruction count, fillrate would get sucked dry. And I wasn't talking about slowdown; the developers set each level's framerate. Levels have different framerates set by the developer. And yeah, LOD is used all the time, but when you talk about those games' poly counts, LOD makes those counts even more irrelevant.
 
LOD

pixelbox said:
I'm sorry. Bump-mapping was used a lot in those games, but because bump-mapping needs a certain amount of instructions, that's all you can have for effects. The GameCube has the lowest fillrate this gen, so if textures exceed the instruction count, fillrate would get sucked dry. And I wasn't talking about slowdown; the developers set each level's framerate. Levels have different framerates set by the developer. And yeah, LOD is used all the time, but when you talk about those games' poly counts, LOD makes those counts even more irrelevant.

I am sure Lair will use LOD, but using LOD will not diminish the fact that it will transform and render far more polygons than any current-gen game. The same goes for Rogue Squadron and Rebel Strike, F5's GameCube games, in relation to Xbox games. As in those games, I am sure F5 will work efficiently within hardware limitations to develop an engine that maximises use of shader effects, texture layers, polygons, lighting, shadows, etc. Given that they made the most all-round high-performance graphics engine of the previous generation, it is likely they will make one of the top PS3 engines.
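
Just to make the LOD point concrete, here is a minimal sketch of how an engine might pick a dragon mesh by camera distance. This is not Factor 5's actual code: the distance thresholds are invented for the example, and only the poly counts come from _phil_'s post.

// Minimal distance-based LOD pick, purely illustrative.
#include <cstdio>

struct LodLevel {
    float maxDistance;  // use this mesh while the dragon is closer than this
    int   polyCount;    // rough triangle budget of the mesh
};

// Highest detail for close-ups, then the 15,000 and 1,500 poly versions.
static const LodLevel kDragonLods[] = {
    {  50.0f, 150000 },  // cinematic close-up range (assumed distance)
    { 200.0f,  15000 },  // "ten dragons on screen" range (assumed distance)
    { 1e9f,     1500 },  // distant flocks of ~100 dragons
};

int PickDragonPolyCount(float distanceToCamera) {
    for (const LodLevel& lod : kDragonLods) {
        if (distanceToCamera < lod.maxDistance)
            return lod.polyCount;
    }
    return kDragonLods[2].polyCount;  // fall back to the coarsest mesh
}

int main() {
    const float distances[] = { 10.0f, 120.0f, 900.0f };
    for (float d : distances)
        std::printf("distance %6.1f -> %d polys\n", d, PickDragonPolyCount(d));
    return 0;
}

A real engine would also blend between levels or use screen-space error instead of raw distance, but the budgeting idea is the same.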
 
All of those polys for one character, and then there's the 10 textures just for one character. How many polygons can the PS3 render at once?
 
PS3 polys

pixelbox said:
All of those polys for one character, and then there's the 10 textures just for one character. How many polygons can the PS3 render at once?

I think only the main character has 10 texture layers. Total PS3 polygon rendering will probably be setup-limited, so peak vertex transform capability is more academic than a reflection of in-game polygon count, but the theoretical peak vertex transform rate for one SPE is 800M vertices per second, almost as much as the 7800GTX. If RSX has 8 vertex shaders like the 7800GTX, then the GPU alone has 1.1B vertices per second, slightly less than the R520, which is 1.25B. If all SPEs are used for vertex transform in addition to the GPU vertex shaders, the total vertex transform capability purely from a chip-logic standpoint is an amazing 6.7B vertices per second. But such numbers are meaningless when it comes to real game performance.

Look at the PS2. Peak transform rate is 150M vertices per second, but the actual peak polygon draw rate was only 20M polygons per second. There are many reasons why this is so, but the main reason is that an actual vertex shader processes many more steps than mere geometry transform, and another is that in the rendering step many polygons are removed. Geometry transform is merely a benchmark of chip-logic capability.
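
As a sanity check, the 6.7B figure is just simple multiplication. A small sketch of the arithmetic, assuming 7 usable SPEs at the 800M vertices per second peak each, plus the 1.1B vertices per second RSX figure:

// Back-of-the-envelope check of the peak figures above.
// All rates are theoretical peaks, not in-game numbers.
#include <cstdio>

int main() {
    const double spePeak  = 800e6;  // assumed peak transform rate per SPE (verts/s)
    const int    speCount = 7;      // usable SPEs in the PS3's Cell
    const double rsxPeak  = 1.1e9;  // assumed RSX vertex-shader peak (verts/s)

    const double cellTotal = speCount * spePeak;   // 5.6 billion
    const double combined  = cellTotal + rsxPeak;  // ~6.7 billion

    std::printf("Cell SPEs: %.1fB verts/s\n", cellTotal / 1e9);
    std::printf("RSX:       %.1fB verts/s\n", rsxPeak / 1e9);
    std::printf("Combined:  %.1fB verts/s (theoretical peak only)\n", combined / 1e9);
    return 0;
}

As said above, none of this says anything about setup-limited, in-game throughput.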
 
ihamoitc2005 said:
I think only the main character has 10 texture layers. Total PS3 polygon rendering will probably be setup-limited, so peak vertex transform capability is more academic than a reflection of in-game polygon count, but the theoretical peak vertex transform rate for one SPE is 800M vertices per second, almost as much as the 7800GTX. If RSX has 8 vertex shaders like the 7800GTX, then the GPU alone has 1.1B vertices per second, slightly less than the R520, which is 1.25B. If all SPEs are used for vertex transform in addition to the GPU vertex shaders, the total vertex transform capability purely from a chip-logic standpoint is an amazing 6.7B vertices per second. But such numbers are meaningless when it comes to real game performance.

Look at the PS2. Peak transform rate is 150M vertices per second, but the actual peak polygon draw rate was only 20M polygons per second. There are many reasons why this is so, but the main reason is that an actual vertex shader processes many more steps than mere geometry transform, and another is that in the rendering step many polygons are removed. Geometry transform is merely a benchmark of chip-logic capability.

Thanks, nice info.

It might not be reflective of in-game numbers, but it's related and good to know, i.e. if peak verts go up, in-game numbers should as well. It gives me a better idea of what type of boost we're talking about from the last gen to this one.
 
Guden,

So, which one is it, Roachy? You keep hemming and hawing about F5 not being trustworthy, but you don't have any proof to back up your claims. Time to step up to the plate, or quit making your insinuations.

and...

You'll find more than one company has made more or less dubious comments and praise about different hardware at different points in time (last discussed in that Team Ninja thread), without it necessarily sparking any particular reactions in Quincy here. He's got it in for F5, that's pretty clear to see. This time his comment was in direct response to various technical performance claims, not related to XNA or whatever.

I don't know what sour grapes you're talking about, but anyone with two eyes could see that Factor 5 had a habit of making exaggerated claims about their own games. Look up past quotes for yourself. To be honest, I really don't care whether you agree or not.

You know what, both of your posts (not only what I quoted) were totally uncalled for! Both of your posts were flame bait. I have no idea why you're taking this personally; try not to derail the thread, thanks.
 
ihamoitc2005 said:
Look at the PS2. Peak transform rate is 150M vertices per second, but the actual peak polygon draw rate was only 20M polygons per second.
There's nothing in the PS2 that limits performance to just 20Mtris/s; that's just some arbitrary number you chose for a reason only you know... I'm pretty sure performance analyzer numbers have shown more than one title exceeding this amount.
 
Qroach said:
Factor 5 had a habit of making exaggerated claims about their own games.

Well, I think their skills in the graphics and tech department speak for themselves when you look at RS2 and Rebel Strike. As for the "PR" talk, the only thing I heard was that they like XNA and the Cell.
 
Guden Oden said:
I'm pretty sure performance analyzer numbers have shown more than one title exceeding this amount.
You're pretty sure? I thought R&C 2 maxed out around 20M.
 
ihamoitc2005 said:
As for frame-rate, you are right, at times the frame-rate drops, but very rarely
Uh, no they weren't - especially the last one. Basically half the game ran at 30fps. The drops were rare in the space-fight scenes for the most part, but certainly not on any land missions or missions flying over a surface. And there were drops below 30fps in some levels as well.
 
PA numbers

Guden Oden said:
There's nothing in the PS2 that limits performance to just 20Mtris/s; that's just some arbitrary number you chose for a reason only you know... I'm pretty sure performance analyzer numbers have shown more than one title exceeding this amount.

Yes, I apologize for not being clear; there is no reason why a higher number cannot be achieved. It has been suggested that higher performance has been achieved, and that might well be true, but the reason I mentioned 20M polygons per second is because that is the highest Performance Analyzer-confirmed figure that was made available to the public.
 
Frame rate drops

Dave Glue said:
Uh, no they weren't - especially the last one. Basically half the game ran at 30fps. The drops were rare in the space-fight scenes for the most part, but certainly not on any land missions or missions flying over a surface. And there were drops below 30fps in some levels as well.

Yes, some levels did seem to be slower, but I did not have as many problems with frame-rate as you did, so perhaps this has something to do with the hardware. Some units might not perform as well as others, even if identical, due to age, over-use, abuse or manufacturing flaws.
 
ihamoitc2005 said:
Yes, some levels did seem to be slower, but I did not have as many problems with frame-rate as you did, so perhaps this has something to do with the hardware. Some units might not perform as well as others, even if identical, due to age, over-use, abuse or manufacturing flaws.

That's a big no! :) The difference lies in the eyes of the one who is playing the game.
 
Eye of beholder not only issue

overclocked said:
That's a big no! :) The difference lies in the eyes of the one who is playing the game.

Actually, manufacturing flaws or "variation" can make a big difference. Why do you think a certain number of pipes in a GPU are "disabled", or the PS3 Cell has only 7 functional SPEs? Because although the units are identical in design, the manufacturing process is such that it is not possible to have perfect part duplication, so natural variation occurs. This is not just true of microchips; it is also true of automobiles, or anything else that is manufactured. Variation in the quality of the final product is unfortunately an unavoidable problem, so a margin of error must be accounted for in the design to limit performance variation. But the more complex the system is, the more difficult and costly it is to limit performance variation in the finished product.
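
As a rough illustration of why disabling one SPE helps, here is a toy yield calculation. The 5% per-SPE defect rate is a made-up number purely for the example:

// Toy yield calculation: chance a Cell die has all 8 SPEs working
// versus at least 7 working. The defect rate is invented.
#include <cmath>
#include <cstdio>

int main() {
    const double p    = 0.05;      // assumed probability that one SPE is defective
    const int    n    = 8;         // SPEs per die
    const double good = 1.0 - p;

    const double allEight = std::pow(good, n);                         // every SPE works
    const double atLeast7 = allEight + n * p * std::pow(good, n - 1);  // one bad SPE tolerated

    std::printf("Yield requiring 8/8 SPEs: %.1f%%\n", allEight * 100.0);
    std::printf("Yield requiring 7/8 SPEs: %.1f%%\n", atLeast7 * 100.0);
    return 0;
}

With those assumptions, requiring all 8 SPEs gives roughly 66% of dies, while tolerating one bad SPE raises it to about 94%, which is why "identical" parts are designed with that margin.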
 
The most variation you're going to get in a console is +/- a few Hz on the clocks. They test components and don't use those which don't work properly. You're not going to get an XB with one of its pixel shaders out of action, or a GC with 256 KB of RAM out of action, due to manufacturing faults. To all intents and purposes the hardware should be identical. Certainly there isn't going to be enough variation to produce ANY discernible difference between machines.
 
Many clocks

Shifty Geezer said:
The most variation you're going to get in a console is +/- a few Hz on the clocks. They test components and don't use those which don't work properly. You're not going to get an XB with one of its pixel shaders out of action, or a GC with 256 KB of RAM out of action, due to manufacturing faults. To all intents and purposes the hardware should be identical. Certainly there isn't going to be enough variation to produce ANY discernible difference between machines.

For any one component what you say is true, but for a system with many different clocks it is not so simple: latency variation develops, and what seems like a small difference, showing only a small measurable difference in normal running, can produce unpredictable behaviour at the limit.
 
Perhaps I'm not educated enough to really know better, but that sounds like total utter bunk, rubbish, gibberish, nonsense. The differences of any components will be +/- 1%, which overall causes a difference of... +/- 1%. Unless you can show me official reports of some consoles playing a game well and others playing it badly, with the reason being these little variations, I won't believe you.
 
Belief

Shifty Geezer said:
Perhaps I'm not educated enough to really know better, but that sounds like total utter bunk, rubbish, gibberish, nonsense. The differences of any components will be +/- 1%, which overall causes a difference of... +/- 1%. Unless you can show me official reports of some consoles playing a game well and others playing it badly, with the reason being these little variations, I won't believe you.

I don't know what you mean by "official" reports, but looking at reviews of certain games will give you enough information. You don't have to believe me. I am sure you have seen reviews where one review says a certain game has slowdown and another review says the same game has no slowdown.

The closed-box nature of console game design means game situations will sometimes be at the limit of the hardware, and the limit is where unpredictable variation has the most opportunity to express itself. I am sure if you look at it mathematically you will agree that it is virtually impossible for unpredictable performance variation not to exist at the limit.
 
Actually, I have seen performance variation in games between different GameCubes, so it does happen.

Feel free to disbelieve me too, though. :p
 