Pssst... PSP... psst... Pixel Shading... psst

Wasn't the purpose of the PCMCIA slot supposed to be an interface for the HDD? In that case the US unit, with its in-case bay, is the better design, but either way it's not "useless" if it has a function.
 
PC-Engine said:
6-12 textured + lit Mpolys/sec ingame with AI, physics, etc. has more useful information than 66 Mpolys/sec untextured + lit non ingame numbers ;)

It's pretty obvious that Nintendo's numbers are more realistic, if conservative, compared to SONY's. Wouldn't it be funny if Nintendo were to go back and change their numbers to 70 Mpolys/sec gouraud-shaded, non-ingame polys? :LOL: ;)

This isn't about how realistic the estimates are - it's about delivering specs of what the system is capable of under certain circumstances. I'd rather have theoretical maximum performance numbers and discuss those than speculate over what "6-12 textured + lit Mpolys/sec ingame with AI, physics" means, and how hard that AI is actually pushing the system while still allowing those 6 to 12 million textured + lit polygons to be drawn.

Give me facts over estimates anyday.

BTW: perhaps Intel should start publishing framerate numbers for Quake 3, as those would be more indicative than 3.06 GHz etc... :rolleyes:
 
That PC-Engine is arguing that spec sheets are near useless for the consumer is understandable (teraflops? what's that?).

But yes, for the tech-savvy they're a pretty good indicator of what is possible on these products.
 
I sure wish nVidia and ATI would tell us what framerates we can expect from undeveloped games when they release information on their next chipsets, rather than useless things like fillrate and capabilities... I mean, how are we able to judge when they just keep hyping things like that? :p



Heh. Now granted, neither of them releases specs that far in advance (since they're wary of each other first and foremost and don't want to give away any maneuvers), and more than likely they don't KNOW for sure a year+ from launch, but no matter when Company X releases specs for Product Y, there is no way to tell for sure. So either everything is "hype" or you learn how to put it all in perspective. I'm sure some developers find the info released as soon as possible to be useful, others find it interesting anyway, and everyone else can take it as they may. Even benchmarking is considered "hype" for most products, since it can be done in ways that very much favor one product over another - and that's with products that exist and are running. So for those who reserve judgement, they're going to be doing so all the way until launch, checking out as many reviews and reports as possible, so why bitch about it now? It's not like the circumstances will change at all until then, for this or any other company out there.
 
DeanoC said:
The Painters algorithm is a valid (and fairly good) form of HSR (hidden surface removal); the PS1 was fairly unique in having hardware acceleration of it. The ordering table hardware would still be handy today in a few situations. The GTE also rocked... best true coprocessor (rather than separate co-CPU-like device) ever in a console/computer.

Not using a Z-buffer saved a fair chunk of VRAM (128K) and a lot of bandwidth, and the subdivision used for perspective correction also helped reduce Z-fighting errors.


Hold ye horses! If the PSone has an ordering table, that means the PS2 must have it too (for emulation purposes)! Wouldn't that in turn mean that you could do cost-free polysorting and thereby be able to use the hardware antialiasing that there were so many complaints about when the PS2 was launched?

I realise that the above is most likely BS, so that leads me to ask the follow-up question:
How much does it cost anyway to Z-sort polys in software? I can see two major advantages to that: 1, it would be easier to do occlusion culling, thereby saving a lot of fillrate, and 2, it would be possible to have the built-in edge AA turned on always. Am I right?

In the Kill Zone thread it was mentioned that ray casting was being used in the game. Could this be for occlusion culling purposes?

And also, can the PS2 still use the Painters algorithm, in hardware or software? It could be useful in some games, where it would save the Z-buffer memory.
 
And also, can the PS2 still use the Painters algorithm, in hardware or software? It could be useful in some games, where it would save the Z-buffer memory.

PA has issues with non-convex models, I think. I guess the on-chip Z support is just too fast 'not' to use.

EDIT: hmm... maybe 'irregular' models would've been more descriptive than 'non-convex' -_-
 
Hold ye horses! If the PSone has an ordering table, that means the PS2 must have it too (for emulation purposes)! Wouldn't that in turn mean that you could do cost-free polysorting and thereby be able to use the hardware antialiasing that there were so many complaints about when the PS2 was launched?
8)
I realise that the above is most likely BS

Actually the ordering table itself was not without limitations (but other people in this thread can explain the details far better than I could). The other absurd part is the idea of sending all your polygons to and from the IOP, and then back to the VU again...

How much does it cost anyway to Z-sort polys in software?
Far too much, I'm afraid. Using triangle strips it's effectively impossible, or time-consuming enough that you'd be running at double-figure seconds per frame.
I can see two major advantages to that: 1, it would be easier to do occlusion culling and thereby saving a lot of fillrate...
If you're thinking per-poly occlusion, you just doubled the triangle-strip problem.
Moreover, while this is application dependent, generally on PS2 it's a bad idea to add more load to the CPU (R59k), as it's the component most likely to slow the rest of the system down.

and 2, it would be possible to have the built-in edge AA turned on always. Am I right?
If you really want edge AA, it's possible to get fairly good results with just macroscopic sorting of vertex chunks and backface culling. The most recent example of it that I've seen is J&D2, which uses it on characters in cutscenes (maybe in other parts too, but I can't tell that from the demo).
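As a rough illustration of the kind of "macroscopic" sorting being described - this is a made-up sketch in plain C++ rather than VU microcode, and every name in it (`VertexChunk`, `submit_chunk_with_edge_aa`, etc.) is hypothetical - whole vertex chunks are ordered back-to-front by a single representative depth and back faces are dropped, which is often good enough for edge AA on a single model without a true per-polygon sort:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// A batch of geometry small enough to process in one go (e.g. one VU upload).
struct VertexChunk {
    std::vector<Vec3> screen_verts;  // triangle list, 3 verts per triangle
    float rep_depth;                 // representative depth, e.g. chunk centre
};

// Screen-space backface test: with counter-clockwise front faces, the signed
// area goes negative for back faces.
bool is_front_facing(const Vec3& a, const Vec3& b, const Vec3& c) {
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y) > 0.0f;
}

// Stand-in for whatever actually submits a chunk with edge AA enabled.
void submit_chunk_with_edge_aa(const VertexChunk&) {}

void draw_chunks_back_to_front(std::vector<VertexChunk>& chunks) {
    // Coarse sort: farther chunks first, nearer chunks painted over them.
    std::sort(chunks.begin(), chunks.end(),
              [](const VertexChunk& a, const VertexChunk& b) {
                  return a.rep_depth > b.rep_depth;
              });
    for (VertexChunk& c : chunks) {
        // Drop back faces so interior edges don't fight with the AA blend.
        std::vector<Vec3> kept;
        for (size_t i = 0; i + 2 < c.screen_verts.size(); i += 3) {
            if (is_front_facing(c.screen_verts[i], c.screen_verts[i + 1],
                                c.screen_verts[i + 2])) {
                kept.push_back(c.screen_verts[i]);
                kept.push_back(c.screen_verts[i + 1]);
                kept.push_back(c.screen_verts[i + 2]);
            }
        }
        c.screen_verts.swap(kept);
        submit_chunk_with_edge_aa(c);
    }
}
```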
 
Fafalada said:
If you're thinking per-poly occlusion, you just doubled the triangle-strip problem.
Moreover, while this is application dependent, generally on PS2 it's a bad idea to add more load to the CPU (R59k), as it's the component most likely to slow the rest of the system down.

Okay, but wouldn't something like, for example, bounding box or convex hull occluders for view frustum culling fit ideally within VU1's domain (math-wise)?
 
Squeak said:
Hold ye horses! If the PSone has an ordering table, that means the PS2 must have it too (for emulation purposes)! Wouldn't that in turn mean that you could do cost-free polysorting and thereby be able to use the hardware antialiasing that there were so many complaints about when the PS2 was launched?

It's been a few years since I worked on a PS1 renderer, so there might be a few mistakes (anybody who spots one, feel free to correct me).

The heart of the PS1 renderer was a DMA chip (just like on the PS2); it had one mode that was basically used for everything. This consisted of a packet of data for the graphics chip plus a pointer to the next packet: a good old-fashioned linked list. You pointed it at the head of the list and it happily chewed its way through until it hit the end packet. The graphics chip had various commands such as bitblt, render triangle, render quad, etc. One very odd thing (by modern standards) was that switching texture was basically free, so each packet could happily address a totally different texture.

The render-triangle packet took 3 fixed-point vertices and drew them into the framebuffer; the GTE had special commands that took 3 transformed vertices and stuck them in a packet. No sharing of vertices was used, each triangle was independent. The OT usually had a fixed number of Z buckets (256 or 512 was a common number IIRC); a quick lookup and linked-list insert placed the triangle in the correct place in the list.
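As a rough illustration of the scheme being described - emphatically not the real libgpu/libgte API, just a hypothetical C++ sketch with invented names - each triangle becomes a small packet, packets are pushed onto the head of a depth bucket, and the buckets are then chained into one linked list for the DMA controller to walk:

```cpp
#include <cstdint>
#include <cstring>

// One GPU packet: command words plus a link to the next packet. The real
// hardware packs the link and payload size into a header word; a plain
// pointer is used here for readability.
struct Packet {
    Packet*  next;
    uint32_t cmd[8];     // e.g. "render triangle" with 3 fixed-point vertices
};

constexpr int OT_SIZE = 512;   // fixed number of Z buckets (256/512 typical)

struct OrderingTable {
    Packet* bucket[OT_SIZE];

    void clear() { std::memset(bucket, 0, sizeof(bucket)); }

    // Constant-time insert: push the packet onto the head of its bucket,
    // where z is the triangle's (already quantised) average depth.
    void insert(Packet* p, int z) {
        if (z < 0) z = 0;
        if (z >= OT_SIZE) z = OT_SIZE - 1;
        p->next   = bucket[z];
        bucket[z] = p;
    }
};

// Chain the buckets far-to-near (assuming larger z = farther) into a single
// list; the DMA controller then walks it, handing each packet to the GPU.
Packet* chain_for_dma(OrderingTable& ot) {
    Packet* head = nullptr;
    Packet* tail = nullptr;
    for (int z = OT_SIZE - 1; z >= 0; --z) {
        for (Packet* p = ot.bucket[z]; p; p = p->next) {
            if (!head) head = p;
            else       tail->next = p;
            tail = p;
        }
    }
    if (tail) tail->next = nullptr;   // end-of-list terminator
    return head;
}
```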

Now the PS2 DMA controller is much more advanced but still has (largely for compatibility, I think) a mode similar to the PS1's. It could in theory do exactly the same job as the PS1 did, BUT the costs are different these days. If you send a single triangle per DMA packet direct to the GS on PS2, performance would suck badly; also the state-change operation is much more expensive than it was on PS1, so even more packets would be needed than on PS1.

So you could use an OT if you were happy with, say, 200,000 triangles per SECOND (that's a total guess, it's probably way too low) rather than the VU method of 200,000 triangles per FRAME.

Squeak said:
I realise that the above is most likely BS, so that leads me to ask the follow-up question:
How much does it cost anyway to Z-sort polys in software? I can see two major advantages to that: 1, it would be easier to do occlusion culling, thereby saving a lot of fillrate, and 2, it would be possible to have the built-in edge AA turned on always. Am I right?
There are better ways of doing PA on the PS2 than a PS1-style OT (using DMA packets), but you're still left with the classic issue that Painters isn't as visually appealing as a Z-buffer. Painters can't handle the cases where there is a cycle in the back-to-front ordering. There are a few places it's useful (particle sorting, etc.), but basically its day is gone. We have very cheap Z-buffers, which 9 times out of 10 are better than Painters. The 1 out of 10 case where Painters wins almost always has to do with transparency (like edge AA etc.), which a Z-buffer can't handle.
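For reference, a minimal and purely illustrative painters-style sort in C++ - not anything from the PS2 toolchain - showing what the explicit per-polygon sort looks like and, in the comments, the cyclic-overlap case it cannot resolve:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Triangle {
    Vec3 v[3];
    // One representative depth per triangle (centroid z in view space).
    float sort_key() const { return (v[0].z + v[1].z + v[2].z) / 3.0f; }
};

// Stand-in for a real rasteriser / packet builder.
void draw_triangle(const Triangle&) {}

void painters_draw(std::vector<Triangle>& tris) {
    // Explicit sort: farther triangles first, nearer ones painted over them.
    // No per-pixel depth test, so the memory and bandwidth for a Z-buffer are
    // saved - but triangles that overlap cyclically (A over B over C over A)
    // have no correct single ordering, which is the failure case mentioned
    // above.
    std::sort(tris.begin(), tris.end(),
              [](const Triangle& a, const Triangle& b) {
                  return a.sort_key() > b.sort_key();
              });
    for (const Triangle& t : tris)
        draw_triangle(t);
}
```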

Squeak said:
In the Kill Zone thread it was mentioned that ray casting was being used in the game. Could this be for occlusion culling purposes?

And also, can the PS2 still use the Painters algorithm, in hardware or software? It could be useful in some games, where it would save the Z-buffer memory.

Almost all games do ray casting somewhere, but without more details it could be related to physics or light flares rather than occlusion culling (ray casting isn't a very good way of doing occlusion culling, IMHO).

Z-buffer memory isn't that much of an issue (the gain far outweighs the small memory cost (256K for 640*200)), but the Z-buffer does fail with transparency, and then you fall back to Painters.
 
Okay, but wouldn't something like, for example, bounding box or convex hull occluders for view frustum culling fit ideally within VU1's domain (math-wise)?
Off the top of my head, yeah, it would fit, although it'd take some work to keep your occluder data within the constraints of VU memory, and you'd need a partial test & sort of vertex batches on the CPU side.
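Something like the following is presumably what that boils down to mathematically - written here as plain scalar C++ with made-up types, whereas on VU1 it would be vectorised microcode operating on batches held in VU memory: a conservative box-vs-frustum-plane test that rejects a whole batch when its bounding box lies entirely behind any one plane.

```cpp
struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // dot(n, p) + d >= 0 for points inside
struct AABB  { Vec3 min, max; };

// Conservative frustum test: returns false only when the box is completely
// behind at least one plane. A box outside the frustum but not outside any
// single plane still passes, which is acceptable for culling.
bool aabb_in_frustum(const AABB& box, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        const Plane& p = planes[i];
        // The box corner farthest along the plane normal (the "positive vertex").
        Vec3 v;
        v.x = (p.n.x >= 0.0f) ? box.max.x : box.min.x;
        v.y = (p.n.y >= 0.0f) ? box.max.y : box.min.y;
        v.z = (p.n.z >= 0.0f) ? box.max.z : box.min.z;
        if (p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f)
            return false;            // entirely behind this plane: cull the batch
    }
    return true;                     // potentially visible: keep the batch
}
```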

Deano said:
If you send a single triangle per DMA packet direct to the GS on PS2, performance would suck badly; also the state-change operation is much more expensive than it was on PS1, so even more packets would be needed than on PS1.
hehe... it is true though that if you really needed per-triangle sort badly, you could do a similar type of list sort reasonably fast on VU within each vertex batch (as long as you're already using discrete triangles), and of course sort the batches outside - but granted, this would only work with non-overlapping vertex batches :p
 
DeanoC said:
The Painters algorithm is a valid (and fairly good) form of HSR (hidden surface removal)

sure, if you don't mind drawing unseen polygons, which kind of defeats the point of HSR...it's kind of a misnomer, you're not "removing" hidden surfaces, you're just drawing over them.
 
Yes, you are removing hidden surfaces, since you're talking about it from the perspective of the viewport.
 
Josiah said:
DeanoC said:
The Painters algorithm is a valid (and fairly good) form of HSR (hidden surface removal)

sure, if you don't mind drawing unseen polygons, which kind of defeats the point of HSR...it's kind of a misnomer, you're not "removing" hidden surfaces, you're just drawing over them.

By that definition a Z-buffer is also not HSR; you often write the same pixel several times and draw unseen polygons. Both are sort algorithms: Painters sorts per polygon, a Z-buffer sorts per pixel. Painters has an explicit sort, whereas a Z-buffer is an implicit sort where only the head of the sort list is kept.
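A tiny sketch (illustrative C++ only, not GS hardware behaviour) of what "implicit sort where only the head is kept" means in practice: each pixel stores just the nearest depth seen so far, and a fragment only survives if it beats that value.

```cpp
#include <limits>
#include <vector>

struct DepthBuffer {
    int width, height;
    std::vector<float> z;   // one stored depth per pixel ("head of the list")

    DepthBuffer(int w, int h)
        : width(w), height(h),
          z(static_cast<size_t>(w) * h, std::numeric_limits<float>::max()) {}

    // Called per covered pixel during rasterisation; returns true if this
    // fragment is the nearest so far and should be shaded/written.
    bool test_and_write(int x, int y, float depth) {
        float& stored = z[static_cast<size_t>(y) * width + x];
        if (depth < stored) {   // nearer than anything drawn here before
            stored = depth;     // only the winner is kept, the rest is discarded
            return true;
        }
        return false;           // hidden at this pixel
    }
};
```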
 
BTW: perhaps Intel should start publishing framerate numbers for Quake 3, as those would be more indicative than 3.06 GHz etc...

Intel doesn't know what GPU their CPUs will be paired up with or how much RAM is in a typical PC ;)

That PC-Engine is arguing that spec sheets are near useless for the consumer is understandable (teraflops? what's that?).

Exactly...the average consumer, not the tech geeks :LOL:


I sure wish nVidia and ATI would tell us what framerates we can expect from undeveloped games when they release information on their next chipsets, rather than useless things like fillrate and capabilities... I mean, how are we able to judge when they just keep hyping things like that?

ATI and Nvidia don't manufacture complete PCs or consoles nor does Intel, read above ;)

BTW 66 Mpolys/sec is not fillrate :p
 
PC-Engine said:
BTW: perhaps Intel should start publishing framerate numbers for Quake 3, as those would be more indicative than 3.06 GHz etc...

Intel doesn't know what GPU their CPUs will be paired up with or how much RAM is in a typical PC ;)

That PC-Engine is arguing that spec sheets are near useless for the consumer is understandable (teraflops? what's that?).

Exactly...the average consumer, not the tech geeks :LOL:


I sure wish nVidia and ATI would tell us what framerates we can expect from undeveloped games when they release information on their next chipsets, rather than useless things like fillrate and capabilities... I mean, how are we able to judge when they just keep hyping things like that?

ATI and Nvidia don't manufacture complete PCs or consoles nor does Intel, read above ;)

BTW 66 Mpolys/sec is not fillrate :p

Microsoft, Sony and Nintendo don't know how the developers will write their games.

Compaq (a PC manufacturer) doesn't know how PC game developers will write their PC games (replace Compaq with Dell, HP, IBM, or any PC manufacturer you want).

Intel, ATI and NVIDIA don't know how OEMs and users will build their PCs, and they don't know how PC game developers will write their PC games.

Sharp doesn't know how bright the environment will be in which people watch TV on their display devices, and Sharp doesn't know how bright the movies and TV programmes will be mastered.

That would mean all the manufacturers shouldn't release numbers, just the type of product (a TV, a PC, a CPU, ...) with a model number and nothing else. Is that what you want? Are you a techie or an average consumer now? :rolleyes:
 
Microsoft, Sony and Nintendo don't know how the developers will write their games.

I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...

Don't they own the developers of GT4? ;)


You think they have no clue? You think they don't have some kind of benchmarking software that includes textures, lights, physics? Do you think they can't release specs that give a range like 6-12 Mpolys/sec?

Some people are just so gullible or just smoking something hallucinogenic :oops:
 
PC-Engine said:
Microsoft, Sony and Nintendo don't know how the developers will write their games.

I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...

Don't they own the developers of GT4? ;)


You think they have no clue? You think they don't have some kind of benchmarking software that includes textures, lights, physics?

Some people are just so gullible or just smoking something hallucinogenic :oops:

What kind of benchmark numbers do you want that will be meaningful to an average consumer ?

e.g. Halo 480p at 30fps ? PGR at 480p at 60fps ?

An average consumer doesn't give a damn about what 480p is, doesn't give a damn about what 60fps is; they just want to play games.

Bungie was owned by MS, so MS should release a number like 'the X-BOX can do Halo at 60fps' - what does that mean to you when you don't play Halo?

Same case with Sony: if they tell you GT4 at 60fps, what does that mean to you or an average consumer?

And so for Nintendo, does 'SMS at 60fps' mean anything to you if you don't play SMS ?

Don't just argue for the sake of arguing.

BTW, I just saw your edit. So you think 6-12 MPoly/s is better than a peak 12 MPoly/s alone?

If a developer has written some shit code and the game is only able to process 3 MPoly/s, will you then claim the released spec to be wrong?

If so, then 0-12 MPoly/s should be an even better released spec, as some developers may not need to process that many polys, or their code just can't process that many but that may still suffice for their games.

I rest my comment.
 
I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...
Considering you have defined absolutely no context for "realworld performance", I would say no, they don't.
Or yes, they do, and so does any random person making up any random number in an arbitrary context that makes that number realistic.

Don't they own the developers of GT4? You think they have no clue? Some people are just so gullible or just smoking something hallucinogenic
This is another point - as you pointed out yourself, 'GT4'. Clearly everyone involved knows much more about the so-called "realworld" behaviour of the hw today than they did back in spring '99 when the hw was first shown to the world. Even working within a defined context, the estimates back then would have been far less accurate than if someone made the exact same estimates today.

Not to mention the fact that the average consumer has very little grasp of what "xx" polygons means, regardless of whether the number is realistic or not.
In that sense any benchmarks are useless, and the only thing that really gives the consumer an actual idea of what to expect would be audio/visual presentations... but then that's what we have tech demos for...
And at least this generation (I'm not all that familiar with older stuff) they've been a pretty good gauge of what the early titles turned out like.
 
maskrider said:
PC-Engine said:
Microsoft, Sony and Nintendo don't know how the developers will write their games.

I'm pretty sure SONY knows how much realworld performance its own console designs can deliver...

Don't they own the developers of GT4? ;)


You think they have no clue? You think they don't have some kind of benchmarking software that includes textures, lights, physics?

Some people are just so gullible or just smoking something hallucinogenic :oops:

What kind of benchmark numbers do you want that will be meaningful to an average consumer ?

e.g. Halo 480p at 30fps ? PGR at 480p at 60fps ?

An average consumer doesn't give a damn about what 480p is, doesn't give a damn about what 60fps is; they just want to play games.

Bungie was owned by MS, so MS should release a number like 'the X-BOX can do Halo at 60fps' - what does that mean to you when you don't play Halo?

Same case with Sony: if they tell you GT4 at 60fps, what does that mean to you or an average consumer?

And so for Nintendo, does 'SMS at 60fps' mean anything to you if you don't play SMS ?

Don't just argue for the sake of arguing.

BTW, I just saw your edit. So you think 6-12 MPoly/s is better than a peak 12 MPoly/s alone?

If a developer has written some shit code and the game is only able to process 3 MPoly/s, will you then claim the released spec to be wrong?

If so, then 0-12 MPoly/s should be an even better released spec, as some developers may not need to process that many polys, or their code just can't process that many but that may still suffice for their games.

I rest my comment.

The analogies you're trying to draw are pretty ridiculous ;)

6-12 MPolys/sec is NOT a LAW...are you living on planet earth? It's a typical performance figure which most games on GCN fall under ;)

Are you telling me that Nintendo has a magic wand that SONY doesn't that allows them to gauge realworld performance???

WTF is 66 Mpolys/sec??? I thought this was a GAME console. Are there any GAMES on PS2 (let alone most) that push 66 Mpolys/sec??? Didn't think so.


Considering you have defined absolutely no context for "realworld performance", I would say no, they don't.
Or yes, they do, and so does any random person making up any random number in an arbitrary context that makes that number realistic.

Read the part about Nintendo's magic wand above :p


This is another point - as you pointed out yourself, 'GT4'. Clearly everyone involved knows much more about the so-called "realworld" behaviour of the hw today than they did back in spring '99 when the hw was first shown to the world. Even working within a defined context, the estimates back then would have been far less accurate than if someone made the exact same estimates today.

Not to mention the fact that the average consumer has very little grasp of what "xx" polygons means, regardless of whether the number is realistic or not.
In that sense any benchmarks are useless, and the only thing that really gives the consumer an actual idea of what to expect would be audio/visual presentations... but then that's what we have tech demos for...
And at least this generation (I'm not all that familiar with older stuff) they've been a pretty good gauge of what the early titles turned out like.

I could've mentioned GT3 too and??? What about Nintendo's magic wand??? :p

The point is that SONY knows the realworld performance figures for their own designs just as Nintendo does, yet SONY chooses to hype raw non-ingame figures instead.
 