Panajev2001a
PC-Engine said:6-12 textured + lit Mpolys/sec ingame with AI, physics, etc. has more useful information than 66 Mpolys/sec untextured + lit non-ingame numbers!
It's pretty obvious that Nintendo's numbers are more realistic, if conservative, compared to SONY's. Wouldn't it be funny if Nintendo were to go back and change their numbers to 70 Mpolys/sec gouraud-shaded non-ingame polys?
DeanoC said:The Painters algorithm is a valid (and fairly good) form of HSR (hidden surface removal); the PS1 was fairly unique in having hardware acceleration of it. The ordering table hardware would still be handy today in a few situations. The GTE also rocked... best true coprocessor (rather than separate co-CPU-like device) ever in a console/computer.
Not using a Z-buffer saved a fair chunk of VRAM (128K) and a lot of bandwidth; the subdivision used for perspective correction also helped reduce z-fighting errors.
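The ordering table DeanoC describes can be sketched roughly like this. This is a hypothetical C mock-up, not the actual PS1 libgpu API (all the names here are made up): primitives are pushed into depth-indexed buckets in O(1), and the table is then walked far-to-near to produce painter's order.

```c
#include <stddef.h>

/* Hypothetical mock-up of a PS1-style ordering table (OT); these are
 * made-up names, not the real libgpu API. Primitives are pushed into
 * depth-indexed buckets in O(1), then the table is walked far-to-near. */

#define OT_SIZE 256  /* number of depth buckets */

typedef struct Prim {
    struct Prim *next;  /* intrusive link, like the PS1 packet tag */
    int id;             /* stand-in for the real GPU packet payload */
} Prim;

typedef struct {
    Prim *bucket[OT_SIZE];  /* bucket[z] = prims at quantized depth z */
} OrderingTable;

void ot_clear(OrderingTable *ot) {
    for (int i = 0; i < OT_SIZE; i++)
        ot->bucket[i] = NULL;
}

/* O(1) insertion: an indexed bucket push, no comparison sort at all. */
void ot_insert(OrderingTable *ot, Prim *p, int z) {
    if (z < 0) z = 0;
    if (z >= OT_SIZE) z = OT_SIZE - 1;
    p->next = ot->bucket[z];
    ot->bucket[z] = p;
}

/* Walk far-to-near, emitting each prim in painter's order. */
void ot_draw(const OrderingTable *ot, void (*emit)(const Prim *)) {
    for (int z = OT_SIZE - 1; z >= 0; z--)
        for (const Prim *p = ot->bucket[z]; p; p = p->next)
            emit(p);
}
```

On the real hardware the far-to-near walk was done by DMA chasing the linked packet tags, which is what made the sort effectively free for the CPU.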
And also, can PS2 still use the Painters algorithm, in hardware or software? It could be useful in some games where it would save the z-buffer memory.
8) Hold ye horses! If PSone has an ordering table, that means PS2 must have it too (for emulation purposes)! Wouldn't that in turn mean that you could do cost-free polysorting and thereby be able to use the hardware antialiasing that there were so many complaints about when PS2 was launched?
I realise that the above most likely is BS, so that leads me to ask some follow-up questions:
Squeak said:How much does it cost anyway to Z-sort polys in software?
Far too much, I'm afraid. Using triangle strips it's effectively impossible, or time-consuming enough that you'd be running at double-figure seconds per frame.
Squeak said:I can see two major advantages to that: 1, it would be easier to do occlusion culling and thereby save a lot of fillrate...
If you're thinking per-poly occlusion, you just doubled the triangle strip problem.
Squeak said:and 2, it would be possible to have the built-in edge AA turned on always. Am I right?
If you really want edge AA, it's possible to get fairly good results with just macroscopic sorting of vertex chunks and backface culling. The most recent example of it that I've seen is J&D2, which uses it on characters in cutscenes (maybe other parts too, but I can't tell that from the demo).
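To put a rough number on "far too much": a software painter's sort is at minimum a comparison sort over per-triangle depths, and it only works on discrete triangles, which is why strips are the killer. A hedged sketch (illustrative names, not any SDK API):

```c
#include <stdlib.h>

/* Illustrative sketch of a software painter's sort: one comparison sort
 * over per-triangle view-space depths. All names are hypothetical. */

typedef struct {
    float z[3];  /* view-space depth of each vertex */
    int   id;
} Tri;

static float tri_depth(const Tri *t) {
    /* Sort key: centroid depth (max vertex depth is another common pick). */
    return (t->z[0] + t->z[1] + t->z[2]) / 3.0f;
}

static int cmp_far_to_near(const void *a, const void *b) {
    float da = tri_depth((const Tri *)a);
    float db = tri_depth((const Tri *)b);
    return (da < db) - (da > db);  /* descending: farthest first */
}

/* O(n log n) comparisons per frame, every frame -- and it requires
 * discrete triangles, so strips must be broken up first, which is the
 * real cost being pointed at above. It also still mis-orders
 * interpenetrating or cyclically overlapping triangles. */
void painters_sort(Tri *tris, size_t n) {
    qsort(tris, n, sizeof(Tri), cmp_far_to_near);
}
```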
Fafalada said:If you're thinking per poly occlusion, you just doubled the triangle strip problem.
Moreover, while this is application-dependent, generally on PS2 it's a bad idea to add more load to the CPU (r59k), as it's the most likely component to slow the rest of the system down.
Squeak said:Hold ye horses! If PSone has an ordering table, that means PS2 must have it too (for emulation purposes)! Wouldn't that in turn mean that you could do cost-free polysorting and thereby be able to use the hardware antialiasing that there were so many complaints about when PS2 was launched?
There are better ways of doing PA on the PS2 than a PS1-style OT table (using DMA packets), but you're still left with the classic issue that Painters isn't as visually appealing as a Z-Buffer. Painters can't handle the cases where there is a cycle in the back-to-front ordering. There are a few places it's useful (particle sorting, etc.) but basically its day is gone. We have very cheap Z-Buffers, which 9 times out of 10 are better than Painters. The 1 out of 10 case where Painters wins is almost always to do with transparency (like edge AA etc), which a Z-Buffer can't handle.
Squeak said:I realise that the above most likely is BS, so that leads me to ask the follow-up question:
How much does it cost anyway to Z-sort polys in software? I can see two major advantages to that: 1, it would be easier to do occlusion culling and thereby save a lot of fillrate and 2, it would be possible to have the built-in edge AA turned on always. Am I right?
Squeak said:In the Kill Zone thread it was mentioned that ray casting was being used in the game. Could this be for occlusion culling purposes?
And also, can PS2 still use the Painters algorithm, in hardware or software? It could be useful in some games, where it would save the z-buffer memory.
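The "1 out of 10" transparency case can be seen in a toy per-pixel sketch of what the two schemes actually do. This is a one-colour-channel model with illustrative names, not any real rasterizer:

```c
#include <float.h>

/* Toy one-channel model of per-pixel depth testing; all names are
 * illustrative. It shows why a Z-buffer beats painter's ordering for
 * opaque geometry but not for transparency. */

typedef struct {
    float depth;
    float color;  /* single channel for brevity */
} Pixel;

void pixel_clear(Pixel *px) {
    px->depth = FLT_MAX;  /* "infinitely far" */
    px->color = 0.0f;
}

/* Opaque fragment: the depth compare makes the result order-independent,
 * resolving even the cyclic overlaps that defeat back-to-front sorting. */
void shade_opaque(Pixel *px, float z, float color) {
    if (z < px->depth) {
        px->depth = z;
        px->color = color;
    }
}

/* Transparent fragment: blending reads the existing color, so the result
 * depends on arrival order -- fragments still need back-to-front sorting
 * even with the Z-buffer present. This is where painter's ordering wins. */
void shade_blend(Pixel *px, float z, float color, float alpha) {
    if (z < px->depth)
        px->color = alpha * color + (1.0f - alpha) * px->color;
}
```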
Squeak said:Okay, but wouldn't something like, for example, Bounding Boxes or Convex Hull occluders, for view frustum culling, fit ideally within the VU1's domain (math-wise)?
Off the top of my head, yeah, it would fit, although it'd take some work to keep your occluder data within the constraints of VU memory, and you'd need partial test & sort of vertex batches on the CPU side.
Deano said:If you send a single triangle per DMA packet direct to the GS on PS2, performance would suck badly; also the state change operation is much more expensive than it was on PS1, so even more packets would be needed than on PS1.
hehe... it is true though that if you really needed per-triangle sort badly, you could do a similar type of list sort reasonably fast on VU within each vertex batch (as long as you're already using discrete triangles), and of course sort batches outside - but granted, this would only work with non-overlapping vertex batches.
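The bounding-box frustum test being discussed really is just a handful of dot products, which is why it maps so well onto VU-style SIMD math. A rough sketch, where the inward-facing plane convention and all the names are my own assumptions:

```c
/* Sketch of the batch-level culling discussed above: test a vertex
 * batch's AABB against the six frustum planes before it is sent on.
 * The inward-facing plane convention and all names are assumptions. */

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;  /* inside: dot(n,p) + d >= 0 */
typedef struct { Vec3 min, max; } AABB;

/* "Positive vertex" trick: only the AABB corner farthest along the plane
 * normal needs testing, so a batch costs 6 dot products instead of 48. */
static int aabb_outside_plane(const AABB *b, const Plane *p) {
    Vec3 v = {
        p->n.x >= 0.0f ? b->max.x : b->min.x,
        p->n.y >= 0.0f ? b->max.y : b->min.y,
        p->n.z >= 0.0f ? b->max.z : b->min.z,
    };
    return p->n.x * v.x + p->n.y * v.y + p->n.z * v.z + p->d < 0.0f;
}

int batch_visible(const AABB *b, const Plane frustum[6]) {
    for (int i = 0; i < 6; i++)
        if (aabb_outside_plane(b, &frustum[i]))
            return 0;  /* fully outside at least one plane: cull it */
    return 1;
}
```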
Josiah said:DeanoC said:The Painters algorithm is a valid (and fairly good) form of HSR (hidden surface removal)
sure, if you don't mind drawing unseen polygons, which kind of defeats the point of HSR...it's kind of a misnomer, you're not "removing" hidden surfaces, you're just drawing over them.
BTW: perhaps Intel should start publishing framerate numbers of Quake 3, as those would be more indicative than 3.06 GHz etc...
That PC-Engine is arguing that spec sheets are near useless for the consumer is understandable (Teraflops? What's that?).
I sure wish nVidia and ATI would tell us what framerates we can expect from undeveloped games when they release information on their next chipsets, rather than useless things like fillrate and capabilities... I mean, how are we able to judge when they just keep hyping things like that?
PC-Engine said:BTW: perhaps Intel should start publishing framerate numbers of Quake 3, as those would be more indicative than 3.06 GHz etc...
Intel doesn't know what GPU their CPUs will be paired up with or how much RAM is in a typical PC!
That PC-Engine is arguing that spec sheets are near useless for the consumer is understandable (Teraflops? What's that?).
Exactly... the average consumer, not the tech geeks!
I sure wish nVidia and ATI would tell us what framerates we can expect from undeveloped games when they release information on their next chipsets, rather than useless things like fillrate and capabilities... I mean, how are we able to judge when they just keep hyping things like that?
ATI and Nvidia don't manufacture complete PCs or consoles, nor does Intel; read above!
BTW, 66 Mpolys/sec is not fillrate!
Microsoft, Sony and Nintendo don't know how the developers will write their games.
PC-Engine said:Microsoft, Sony and Nintendo don't know how the developers will write their games.
I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...
Don't they own the developers of GT4?
You think they have no clue? You think they don't have some kind of benchmarking software that includes textures, lights, physics?
Some people are just so gullible or just smoking something hallucinogenic!
maskrider said:PC-Engine said:Microsoft, Sony and Nintendo don't know how the developers will write their games.
I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...
Don't they own the developers of GT4?
You think they have no clue? You think they don't have some kind of benchmarking software that includes textures, lights, physics?
Some people are just so gullible or just smoking something hallucinogenic!
What kind of benchmark numbers do you want that will be meaningful to an average consumer?
e.g. Halo at 480p at 30fps? PGR at 480p at 60fps?
An average consumer doesn't give a damn about 480p, doesn't give a damn about 60fps; they want to play games.
Bungie was owned by MS, so MS should release a number like "X-BOX can do Halo at 60fps" - what does that mean to you when you don't play Halo?
Same case with Sony: if they tell you GT4 at 60fps, what does that mean to you or an average consumer?
And so for Nintendo: does "SMS at 60fps" mean anything to you if you don't play SMS?
Don't just argue for the sake of arguing.
BTW, just saw your edit. So you think 6-12MPoly/s is better than a peak 12MPoly/s alone?
If a developer has written some shit code and the game is only able to process 3MPoly/s, will you then claim the released spec to be wrong?
If so, then 0-12MPoly/s should be a better released spec, as some developers may not need to process that many polys, or their code just can't process that many but may suffice for their games.
I rest my comment.
Considering you have defined absolutely no context for "realworld performance", I would say no, they don't.
Or yes, they do, and so does any random person making up any random number in an arbitrary context that makes that number realistic.
This is another point - as you pointed out yourself, 'GT4'. Clearly everyone involved knows much more about the so-called "realworld" behaviour of the hw today than they did in spring '99 when the hw was first shown to the world. Even working within a defined context, the estimates back then would have been far less accurate than if someone made the exact same estimates today.
Not to mention the fact that the average consumer has very little grasp of what "xx" polygons means, regardless of the number being realistic or not.
In that sense any benchmarks are useless, and the only thing that really gives the consumer an actual idea of what to expect would be audio/visual presentations... but then that's what we have tech demos for...
And at least this generation (I'm not all that familiar with older stuff) they've been a pretty good gauge of what early titles turned out like.