Vince said:
Tagrineth said:
If it's all public... why do nVidia and ATi's implementations both differ so wildly?
Why do they have such drastically different performance characteristics?
Because they are design implementations; they implement what? Um, theory. From where...
I mean, what do you think people present at conferences like SIGGRAPH or IEEE or GDC? What do you think the research establishment at universities does? Do you think nVidia and ATI do it all themselves? I mean, what are you thinking?
Ah, but why does everyone else's implementation suck so horribly? With the notable exception of 3dlabs, but I doubt they'd license SuperScene to Sony.
Matrox put forth a valiant effort with 16x FAA, but that implementation fizzled pretty badly. A decent Wu algo would probably work better most of the time...
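For the curious, here's a minimal sketch of the kind of Wu algo I mean - Xiaolin Wu's antialiased line, simplified to integer endpoints with the endpoint-weighting step omitted. The plot() helper is hypothetical, a stand-in for whatever framebuffer blend you have:

```c
#include <math.h>
#include <stdlib.h>

/* Hypothetical helper: blends 'brightness' (0..1) of the line colour
 * into the pixel at (x, y). Stands in for real framebuffer access. */
void plot(int x, int y, float brightness);

static float fpart(float v)  { return v - floorf(v); }
static float rfpart(float v) { return 1.0f - fpart(v); }

/* Xiaolin Wu's antialiased line: at each step along the major axis,
 * split coverage between the two pixels straddling the line's true
 * position, weighted by the fractional distance. */
void wu_line(int x0, int y0, int x1, int y1)
{
    int steep = abs(y1 - y0) > abs(x1 - x0);
    if (steep)   { int t; t = x0; x0 = y0; y0 = t; t = x1; x1 = y1; y1 = t; }
    if (x0 > x1) { int t; t = x0; x0 = x1; x1 = t; t = y0; y0 = y1; y1 = t; }

    float gradient = (x1 == x0) ? 1.0f : (float)(y1 - y0) / (float)(x1 - x0);
    float y = (float)y0;

    for (int x = x0; x <= x1; x++) {
        int yi = (int)floorf(y);
        if (steep) {
            plot(yi,     x, rfpart(y));   /* nearer pixel gets more coverage */
            plot(yi + 1, x, fpart(y));    /* farther pixel gets the rest */
        } else {
            plot(x, yi,     rfpart(y));
            plot(x, yi + 1, fpart(y));
        }
        y += gradient;
    }
}
```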
Matrox's Parhelia-512 was an embarrassment, and it was some five years in the making, too. And Matrox had access to just as many public docs as Sony does.
And apparently, Volari and DeltaChrome use pure supersampled AA (GeForce2 / Radeon R6 tech; PS2 is capable of it; it's a purely public-domain method)... and one of the two (can't remember which) isn't capable of AF, full stop (it just modifies LOD if you turn it on).
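For reference, pure supersampled AA really is the most brute-force method there is: render at 2x the width and height, then box-filter every 2x2 block down to one output pixel. A sketch of the resolve step, assuming an 8-bit grayscale buffer for brevity - RGBA works the same per channel:

```c
#include <stdint.h>

/* Ordered-grid 2x supersampling, resolve step: average each 2x2 block
 * of the oversized render target into one output pixel. */
void downsample_2x2(const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2, dst_h = src_h / 2;
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            int sum = src[(2 * y)     * src_w + (2 * x)]
                    + src[(2 * y)     * src_w + (2 * x + 1)]
                    + src[(2 * y + 1) * src_w + (2 * x)]
                    + src[(2 * y + 1) * src_w + (2 * x + 1)];
            dst[y * dst_w + x] = (uint8_t)(sum / 4);  /* box filter */
        }
    }
}
```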
WTF? Where was Z3 presented? Siggraph '99. And that's just off the top of my head, from someone not in the industry. There is a massive collection of publications and research on information theory and AA.
Okay? So why do Volari and DeltaChrome, very new cores whose parent companies certainly had access to this same presentation, have such horrid implementations? Please explain THAT to me.
Theory != practice, and in today's tech world, stepping on other companies' patents is a constant danger.
In theory, Sony could've included at least EMBM or DOT3 bump mapping, or something along those lines, in GS. Those two technologies were pretty widespread by then, wouldn't you agree?
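DOT3 in particular is dead simple at the pixel level: unpack a tangent-space normal from the normal map, dot it with the light vector, clamp. A rough sketch - the function and its packing convention here are illustrative, not any real API:

```c
#include <stdint.h>

/* DOT3 bump mapping, one pixel: the normal map stores a tangent-space
 * normal packed as (n * 0.5 + 0.5) * 255 per channel; diffuse light is
 * max(0, N . L). The light vector (lx, ly, lz) must be unit length. */
uint8_t dot3_diffuse(uint8_t nr, uint8_t ng, uint8_t nb,
                     float lx, float ly, float lz)
{
    float nx = nr / 127.5f - 1.0f;   /* unpack [0,255] -> [-1,1] */
    float ny = ng / 127.5f - 1.0f;
    float nz = nb / 127.5f - 1.0f;
    float d = nx * lx + ny * ly + nz * lz;
    if (d < 0.0f) d = 0.0f;          /* clamp back-facing to black */
    return (uint8_t)(d * 255.0f);
}
```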
GSCube = PS2 graphics with a fucking ludicrous amount of extra polygons. No amount of parallelism will add features to your graphics pipeline until you bite the bullet and run a software renderer - at which point the issue is moot.
Hey, who in that other thread questioned Faf when he said this place was going to be a veritable hell-on-earth if Sony were to go Micropolygon? Well, this is what you're going to get, times the other 500 members who won't spend the time to learn about it.
Sony tried to go with mass polys this gen. And what did we get? A lot of really bad-looking games. A few good-looking ones, but sadly, even the PS2's best-looking games don't quite stack up against GameCube's best visuals (except from an art-direction viewpoint - and if you want to go THAT far, we could bring in some Dreamcast examples too! Besides, art direction falls under aesthetics, which can't really be measured too well).
GeForce256 and Radeon R6 both have basic pixel shaders (nVidia Shading Rasteriser and Charisma Engine, respectively). PS2 has alpha blending. WOOP.
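And just so we're clear on how much per-pixel maths that "WOOP" amounts to - fixed-function alpha blending is exactly this, per channel:

```c
#include <stdint.h>

/* Standard alpha blend: out = src * a + dst * (1 - a), with an 8-bit
 * alpha in [0,255]. This is the whole of the per-pixel op in question. */
uint8_t alpha_blend(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}
```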
WOW, NSR and Charisma... shit, that sounds really powerful. I bet they got used in every title developed during that period! I also bet the games on 2000's hardware are just soo much better looking than PS2's... ohh, wait, no.
I never said they were used. In fact, offhand I can't name a single game that used either of them. The technology was in place, though.
And the games from 2000 didn't use the hardware - AS YOU SO KINDLY POINTED OUT YOURSELF. Hell, compare Xbox's visuals to those of games from GeForce3's launch. Games that actually USE the hardware to any serious, low-level extent DO end up looking a whole lot better. Just like PS2's graphics end up looking quite a bit better than anything that was achieved during Voodoo/Voodoo2/Voodoo3's time, despite the similar feature set.
Some people actually prefer well-done pixel operations and texturing to fifty billion extra polygons - some people prefer Dreamcast's graphics to PS2's for this very reason.
Some people also remember what the GeForce256 SDR was capable of. And we still shiver in horror, only made worse by JVD's attempt at making it into something other than a horrible product.
DOOM3 technically uses (or at least at one point used - and it hasn't changed much visually, except for things like HDR) GeForce256's feature set as a base.
Granted, it won't run well on a GeForce256 (partly because of OS abstraction, the AGP bus, the driver interface, general programming overhead, and Carmack's severe abuse of five hundred passes per pixel)... but it's the technology.
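And the "five hundred passes" crack isn't far off structurally: DOOM3-style lighting is a depth pre-pass plus one additive pass over the geometry per light. A rough sketch in fixed-function-era GL - draw_scene_geometry(), bind_light_parameters() and light_count() are hypothetical stand-ins for engine code, and stencil shadow volumes are omitted:

```c
#include <GL/gl.h>

/* Hypothetical engine helpers, assumed to exist elsewhere. */
void draw_scene_geometry(void);
void bind_light_parameters(int light_index);
int  light_count(void);

/* Multipass lighting: depth pre-pass, then one additive pass per light. */
void render_multipass(void)
{
    /* Pass 0: lay down depth only, no colour writes. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthFunc(GL_LESS);
    draw_scene_geometry();

    /* Passes 1..N: additive blend, depth-equal so only the visible
     * surface at each pixel accumulates each light's contribution. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (int i = 0; i < light_count(); i++) {
        bind_light_parameters(i);
        draw_scene_geometry();   /* whole scene again, per light */
    }
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
```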
Judging by the screenshots of "Death Jr" or whatever, PSP has a long way to go as far as pixel maths go. It looks about halfway between N64 and Dreamcast - wow, it has texture filtering and alpha blending! WOWIE!
Funny girl. Still missed the point: if ATI and nVidia are so much further ahead of the rest of the industry - the likes of Sony and PowerVR - why is their portable 3D a lot worse than the PSP and its "halfway between N64 and Dreamcast" graphics?
Because the portable sector has never been their focus, and demand for high-performing 3D in the hand-held market has been absurdly small until very, very recently. Look at PowerVR MBX, though, if you want to see hand-held graphics from a PC sector dev who actually dared enter the market.