Why does Sony create such wacky architectures?

I think it was released in the 2nd or 3rd week of December. I don't know what the PA is used for, that is why I am asking here. :?
We have no idea whether Getaway used the PA, but explain Primal. :)
 
WHY are you people even trying to get something into the little chap's head?

he's hopeless.... :LOL:

just give up and ignore him, it's pretty clear he has no idea whatsoever about what he's talking about.... and he's not interested in real information from competent people. he's not worth anyone's time on here.
 
Struth, what a lot of traffic!
zidane1strife said:
PS2 supports ALL HDTV standards in the specs,
I'm curious as to how it would achieve that. AFAIU, it has 4MB of storage. I was under the impression that half could be used for framebuffers and the other half for textures (Faf, please correct me if I'm wrong). If we double buffer, use 16 bpp for images and a 16bpp Z buffer, that allows for ~330k pixels. How do you get HDTV res with that?
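Rough arithmetic behind that figure, assuming half of the 4MB (2MB) goes to the buffers and each pixel needs three 16bpp values (front, back and Z):
[code]
# Back-of-the-envelope check (assumes 2MB of the 4MB reserved for buffers).
buffer_budget = 2 * 1024 * 1024   # bytes for framebuffers + Z
bytes_per_pixel = 2 + 2 + 2       # 16bpp front + 16bpp back + 16bpp Z
print(buffer_budget // bytes_per_pixel)   # 349525 pixels, i.e. roughly 330-350k
[/code]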

Panajev2001a said:
I doubt in 2001 you could have fit Naomi 2, with all those RAM pools ( texture, models, main RAM ), two GPUs in parallel under $300...
But you wouldn't. For a console, you'd take time to fit them all into a couple of chips. As I said before, I think the silicon area of PS2 was still larger than that of N2 even with it spread over several chips.

zidane1strife said:
Hmmm, what are the stats on the naomi2?
From what I've heard previously it's top on lights mostly... but when you take that out gpus like that in the xbox can eat it for breakfast in flexibility and features... Or is that wrong?
The conservative (i.e. quite a bit below the max spec) figures quoted are 10MT/s with 6 fully-featured lights (i.e. not just a dot product!). None of the DX7 chips that have been released and possibly no DX8 chips either would come close to performing what Elan was designed to do. Of course, DX8 chips are more flexible in what they do, but that flexibility comes at a performance price. BTW 'lights' on Elan are actually more generic things and can be used for other interesting purposes.

How does it fare in the areas of shadows..
AFAIAA Elan is the only hardware system that can perform correct clipping of shadow volumes...
Or in the various motion blurring, heat haze, and other effects like that?
DC can access the framebuffer so developers could do those sorts of things if they'd wanted.

I dunno much about it
you said it ;)

marconelly! said:
I said it beats Xbox, GCN, and PS2 in lighting. This from technology that's even older than PS2
You must be joking? N2 AFAIK doesn't support per pixel lighting.
Gosh, I'll have to go and undo all my design work. No, DC was perhaps the first commercial HW to have per-pixel perturbed normal bump mapping. Elan added some extra features to make that even easier to use.
I'd be extremely surprised if it could render a full scene bumpmapped game (does it even support bumpmapping in hardware),
Yes it could. Don't be misled by the fact that the first and second generation games barely scratched the surface of DC's facilities. Just because another console got dumped on the market at a huge discount does not mean it is not possible. I suggest you have a search for the old thread that discusses the differences between the PVR and probable 3dfx DC solutions.

Besides that, if N2 games released so far are anything to go by, effects like DOF, motion blur, heat haze, particles, etc, were either non existing or very sparse.
See previous.

Panajev2001a said:
Naomi 2 supports loopback ???
Sorry but Naomi 1 was based on the same PVRDC chipset the DC used and Naomi 2 uses two of those together with a T&L dedicated chip ( Elan )...
PVRDC does NOT support loop-back...
That is correct, however PVR2 has two internal scratch/accumulation buffers so you can do some 'interesting' things with multi-textures.

marconelly! said:
Actually it does support bumpmapping and perpixel lighting
Yeah, just like Dreamcast 'supports' them. Doesn't mean it can really be done in the game. There's just no chance N2 would be able to render full scene bump (normal) mapped multitextured game with per pixel lighting on *everything*.
Yes exactly like DC supports them... in fact, even more so. DC could easily apply bump mapping to everything.

Sonic said:
Simon, thanks again.
No worries! Nice to see someone sane posting.
All the bump mapping I've seen N2 do is limited to 2 - 3 objects on screen at most, even though the framerate stays the same. I can't imagine all the time spent doing bump mapping could be done for the whole screen.

Hey Simon, how many passes or clock cycles does it take the CLX chip to bump map?
1 clock, i.e. the same as standard texturing. In other words, it's not an issue to have everything bump mapped if you wanted it.
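For anyone unfamiliar with the term, here's a generic sketch of what per-pixel dot-product bump lighting computes per texel; this is just the idea, not a description of how CLX actually wires it up:
[code]
# Generic per-texel bump lighting sketch (illustrative only, not the CLX pipeline).
def bump_intensity(normal, light_dir, ambient=0.1):
    # normal: perturbed surface normal from the bump map, assumed normalised
    # light_dir: direction towards the light, assumed normalised
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return min(1.0, ambient + (1.0 - ambient) * n_dot_l)

# A texel whose perturbed normal points straight at the light gets full intensity:
print(bump_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # 1.0
[/code]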
Lazy8s said:
Sonic:
But this situation has changed for the most part; I'm even starting to see PS2 games with textures that look better than the best Dreamcast games graphics-wise.
It doesn't help DC's cause that original development for it got frozen somewhere between second and third-generation software.
Agreed!
And I doubt ELAN was too much more than $30 a pop in quantity...
No comment ;)

Crazyace said:
Things seem to have gone off track a bit..
I don't think that Naomi 2 would have been viable consumer hardware.. It was much more of a 'power' machine, using duplicate renderers
And the amount of silicon area in the PS2 wasn't?!
Perhaps the most interesting comment about the quality comparison comes directly from the VGA display compared with the TV display. This moves things away from the target market - after all only the truly hardcore ( in a mass market ... ) DC owners would use it on a VGA monitor ( I'm one ).
It's just as valid for (decent) televisions as well. DC (usually) renders at frame resolution and then does on-the-fly downfiltering to produce the fields. This makes a big difference to the quality.
if you want to play what-if - what if the PS2 supported VQ textures ( a simple engineering task to modify CLUT access )?
It's not "a simple engineering task" at all! The VQ textures used a second cache for the vector table. It would be rather expensive to spend time uploading a new texture LUT every time you changed texture... but then of course you are forced to do that on 'some' systems.
and full OpenGL colour blending,
[sarcasm]gosh, why would anyone want that?[/sarcasm]

Almasy said:
Lazy, I cannot put into words how much your useless, poetic multi-page rants that cover a single point annoy me. You know there's a little writing "technique" that you should have learned earlier, and it's called being concrete.
Of the posts I've read so far, I thought Lazy8's seemed quite sensible. I don't normally go in for flaming people, but that's more than can be said for some.

That's it. I've wasted enough time on this thread and my lunch hour has vanished.
 
I don't remember seeing BM in any DC games I played???
The fact that Sega added some BM in the Xbox port of Shenmue 2 (I hear they did) says something, ya?

Doesn't the DC have like very little fillrate or bandwidth or whatchacallit?

Add motion blurring too. The only game I recall having MB is DOA2's final boss and it looked harsh compared to what we have today.....

And yeah, what's with the PS2-ish 'DC still has untapped powers' thing too? :LOL: I mean, the system has been around for 2 years and according to Lazy, it has been programmed by the best of the bunch: SEGA. So eh, untapped? I see games like Shenmue, SA and Test Drive: LM pushing the DC hard. yes yes?? :oops:
 
Simon F said:
Struth, what a lot of traffic!
zidane1strife said:
PS2 supports ALL HDTV standards in the specs,
I'm curious as to how it would achieve that. AFAIU, it has 4MB of storage. I was under the impression that half could be used for framebuffers and the other half for textures (Faf, please correct me if I'm wrong). If we double buffer, use 16 bpp for images and a 16bpp Z buffer, that allows for ~330k pixels. How do you get HDTV res with that?

Ok, so I'm not Faf, but AFAIK the 4MB of RAM on the GS is unified, you can use it however you like. So, it has enough RAM for one frame at 1920x1080x16 (4050KB). If you use field rendering, no Z buffering and only very small textures (no more than 46KB per texture), you could actually make a game running at 1920x1080i... :)
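Spelling the numbers out (16bpp = 2 bytes per pixel, 4MB = 4096KB total):
[code]
frame_kb = 1920 * 1080 * 2 / 1024    # one 16bpp 1080-line frame
print(frame_kb)                      # 4050.0 KB
print(4 * 1024 - frame_kb)           # 46.0 KB of the 4MB left over for textures
[/code]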
 
Thowllly said:
Ok, so I'm not Faf, but AFAIK the 4MB of RAM on the GS is unified, you can use it however you like. So, it has enough RAM for one frame at 1920x1080x16 (4050KB). If you use field rendering, no Z buffering and only very small textures (no more than 46KB per texture), you could actually make a game running at 1920x1080i... :)



And who would like to play a game like that.....
 
I'm curious as to how it would achieve that. AFAIU, it has 4MB of storage. I was under the impression that half could be used for framebuffers and the other half for textures
Actually the memory is unified, so it can all be used for either. As for the higher HDTV resolutions, no one said they were particularly useful for games. (It'd indeed take some exotic approaches to cram in 1080i with Z buffering and all, although it's not impossible.)
GS supporting HDTV and high VESA resolutions is more a legacy of it originally being designed for applications other than gaming. (Much like I imagine DC's PVR carries higher VGA support because of its PC legacy...?)
On the flipside, the much newer Flipper doesn't go beyond 640x528 - it was designed for the target market only, without unnecessary redundancies.

Elan is the only hardware system that can perform correct clipping of shadow volumes...
I'd think the question was more regarding the performance issues. I have no clue about modifier volume performance characteristics, but stencil volume drawing is almost always fill limited, even with really high fillrates of over 2gpix (and in 640x480 that is).

It's not "a simple engineering task" at all! The VQ textures used a second cache for the vector table. It would be rather expensive to spend time uploading a new texture LUT every time you changed texture... but then of course you are forced to do that on 'some' systems
I don't see why that would be a problem... you have CLUTs sitting in eDRAM, reloading cost per texture is negligible at most, and it's pretty much what happens on PS2 already.
Besides, if I can make the existing GS to unpack VQ in 2passes, it 'shouldn't' be a major engineering task to modify the hw that much :p
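For context, here's roughly what a software VQ decode involves, assuming DC-style VQ: a 256-entry codebook of 2x2 16bpp texel blocks plus a one-byte-per-block index map (~2 bits/texel). The texel ordering within a block is simplified here, not necessarily the real layout:
[code]
# Simplified software VQ decode (DC-style: 256-entry codebook, 2x2 texels per entry).
def vq_decode(indices, codebook, blocks_w, blocks_h):
    out = [[0] * (blocks_w * 2) for _ in range(blocks_h * 2)]
    for by in range(blocks_h):
        for bx in range(blocks_w):
            t00, t01, t10, t11 = codebook[indices[by][bx]]   # four 16bpp texels
            out[by * 2][bx * 2] = t00
            out[by * 2][bx * 2 + 1] = t01
            out[by * 2 + 1][bx * 2] = t10
            out[by * 2 + 1][bx * 2 + 1] = t11
    return out
[/code]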
 
Fafalada said:
I'm curious as to how it would achieve that. AFAIU, it has 4MB of storage. I was under the impression that half could be used for framebuffers and the other half for textures
Actually the memory is unified, so it can all be used for either. As for the higher HDTV resolutions, no one said they were particularly useful for games. (It'd indeed take some exotic approaches to cram in 1080i with Z buffering and all, although it's not impossible.)
Thanks for clearing that up. I must have misread (more likely mis-remembered) something in the past.

Elan is the only hardware system that can perform correct clipping of shadow volumes...
I'd think the question was more regarding the performance issues. I have no clue about modifier volume performance characteristics, but stencil volume drawing is almost always fill limited, even with really high fillrates of over 2gpix (and in 640x480 that is).
Modifier volumes are quite a bit more efficient than plain stencils because the (non volume) objects that get modified have "two" shading options for the bits that fall inside and outside of the volumes. This cuts down the amount of fill needed. The volumes themselves take the same sort of fillrate as stencils but this gets done in the ISP (i.e. very high fill rate) part and can carry on in parallel with the texturing. Obviously, you can't afford to go completely mad with them! :)

It's not "a simple engineering task" at all! The VQ textures used a second cache for the vector table. It would be rather expensive to spend time uploading a new texture LUT every time you changed texture... but then of course you are forced to do that on 'some' systems
I don't see why that would be a problem... you have CLUTs sitting in eDRAM, reloading cost per texture is negligible at most, and it's pretty much what happens on PS2 already.
Besides, if I can make the existing GS to unpack VQ in 2passes, it 'shouldn't' be a major engineering task to modify the hw that much :p
Ok. I assume you don't mean decompressing the texture into the EDRAM as you'd not fit all that much in, so I suppose you'd apply the indices to the object and then come back and use those to access the LUT. Can you do dependent texture reads in the GS?
 
Modifier volumes are quite a bit more efficient than plain stencils because the (non volume) objects that get modified have "two" shading options for the bits that fall inside and outside of the volumes. This cuts down the amount of fill needed. The volumes themselves take the same sort of fillrate as stencils but this gets done in the ISP (i.e. very high fill rate) part and can carry on in parallel with the texturing.
Thanks for the info, I always figured the efficiency was better, just didn't know quite how much the difference really is.

Obviously, you can't afford to go completely mad with them!
I don't think you can quite do that on any current hw yet. Although I guess it depends on definition of crazy. ;)

Ok. I assume you don't mean decompressing the texture into the EDRAM as you'd not fit all that much in
Er, actually that's exactly what I meant. It'll depend on what configuration you use for eDRAM in your app, but I'm used to having around a 1MB working buffer.
Thing is, however, two ops/texel means you actually get more effective texture data per frame with standard 8 or 4-bit, but the upside is less main memory use, which I found much more often to be a problem. But yeah, in the end it's not particularly useful as a general solution (or at least, there are better alternatives).

so I suppose you'd apply the indices to the object and then come back and use those to access the LUT. Can you do dependent texture reads in the GS?
I think I did hear someone do a form of VQ with something along the lines of applying indices to objects, but I haven't seen that in action yet, so I'm not sure how well it works either.
And unfortunately no - of course dependent texture reads would have made VQ a breeze :\
 
Simon,

I am here to call you on your bias towards the DC architecture... you obviously are over-amplifying PVRDC capabilities with that single cycle bump mapping and easy stencil-shadowing... In order to solve the trickery I demand to be provided with a trial DC dev kit...


//from, again, Monty Python and the Holy Grail, I'd expect at this
//moment you and your companions to be throwing rocks and tons of other stuff
//from the castle walls...


Ok, that didn't work... what if I beg ? ? ?

:LOL: I was only kidding with the "offensive part of the post", BTW ;)

I am still pondering about the PS2 Linux kit, but I quite sincerely do not have $199 in the "free to spend" budget available right now and I am in no mood to look at warez stuff...

My memories of Libdream ( Dan Potter's DC library ) are full of "I cannot compile/insert bug at compile time of the toolkit, etc..." moments, and I abandoned researching how to run self-made hobby programs on DC HW...

:(

oh well... GBA ( homebrew coding ) will be the only fun I will be able to have at the moment then ;)

I will keep checking the development of free DC devkits...

Here I'd politely ask Simon and the rest of the PVR team that surf these boards if in their spare time ( if they have any ) they could help the DC homebrew coding scene: I am not saying to release the DC SDK you have to the public, but maybe to help ( Dan Potter and friends seem very competent, but more competent programmers would help the free DC SDK projects to proceed faster )...

If you refuse I have ( oh no another quote from the same moviieeeeeeee !!! ) some friends, who happen to be knights that pronounce a word you PVR/IMG employees hate, so I won't repeat it here ;)



Fafalada,

I understand... IIRC people were claiming that the GS's triangle set-up engine started to choke after the 15 MTriangles/s point... ( is that counting triangles as 1 triangle = 1 vertex ? )

Are you saying that basically using VU1 and doing some nice complex lighting and multi-texturing the rate we send triangles to the GS's triangle set-up engine is below 15 MTriangles/s ?

IF that number counted multi-texturing and we had like 2 textures per polygon, our engine would basically be rendering a max of 7.5 MTriangles/s ( if the 15M number was vertices/s then this one is too ) or 125 KTriangles/frame

If an architecture were to work with single-pass multi-texturing, our 15 MTriangles/s could be almost all effective triangles used in the scene and our polygon budget per frame would indeed grow to 250 KTriangles/frame...
It seems to me that even on current PS2 HW single-pass multi-texturing could have improved performance...
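Spelling that arithmetic out ( same assumptions as above: a 15 MTriangles/s set-up ceiling, 60 fps, 2 texture passes per polygon vs. single-pass ):
[code]
setup_rate = 15_000_000            # triangles/s into the GS set-up engine (claimed)
fps = 60
for passes in (2, 1):
    effective = setup_rate / passes
    print(passes, effective, effective / fps)
# 2 passes: 7.5M effective triangles/s -> 125,000 triangles/frame
# 1 pass (single-pass multi-texturing): 15M -> 250,000 triangles/frame
[/code]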

Of course if our effective polygon budget/frame doubles then the risk we will be main memory limited will grow too as we will need to have more data to be stored in main memory...

BTW, you mentioned 2-pass VQ... well, if main-RAM free space is the issue and you are already using 2-3 rendering passes in your engine, would not that give you VQ de-compression for free ?

If VQ textures can effectively provide higher compression ratios in the engine used compared to the already available 4-8 bit CLUT ( on average; it might vary from texture to texture ), this will also mean that on average transferring texture data from disc to main RAM, and from main RAM to the GS, will take less time and less bandwidth...

Think about an engine which is doing multiple rendering passes and has some VRAM left, but is not using VQ... think about adding VQ ( basically for free ): this would decrease the amount of bandwidth texture data steals from EE's main bus, from main memory and from the GIF-to-GS bus, improving overall system efficiency...

Using any of the paths, texture data does indeed travel onto EE's FSB and in that "period" the bus is busy and cannot be used by the VUs or the RISC core... if we can decrease that "texture transfer period" even by a small amount, that would help overall system performance by increasing average efficiency as we would be able to keep the execution units a little bit better fed... all IMHO...
 
Simon F, I must say that your assessments of DC hardware sometimes really sound, shall I say... a bit too optimistic. Full scene bumpmapping on DC? What is that scene going to have, exactly? As you can see, even the Xbox, with its graphics hardware that is unquestionably more apt at performing a function like that, is struggling to reach even 30FPS in a game that requires everything to be normal mapped (see Deus Ex 2 thread, and Halo 2 author's comments from EGM).

As for those post processing effects such as DOF, etc. I know DC *could* do them, but for whatever reason they always were extremely sparse and not so good looking.
 
Marconelly... on DC, I think this is also part of Simon's point, the PVR would not be THE limiting factor of the platform... maybe the SH-4 would be first...

You could do fast and correct stencil-shadowing ( from what he says the performance hit is not horrible ), but generating the shadow volumes will be a pain and that will have to be carried by the CPU which also has to carry T&L, physics, A.I. etc... the FPU will not be available while the Vector Unit is processing data...

You can do bump-mapping, and the basic fill-rate is what it is: only 100 MPixels/s IIRC, although you are discarding occluded geometry ( best case: only opaque overdraw... DC is not as strong in transparencies as PS2... )... but normal bump-mapping is only 2 textures: base texture + normal/bump map... and this means 2 clocks/rendered pixel... 50 MPixels/s... add in at least 1 more cycle ( can we do EMBM in 3 cycles on the PVRDC/PVR2 ? )... which would bring us to 33 MPixels/s

640x480 at 30-60 fps = ~9-18 MPixels/s

And we have not factored in transparencies, and we are assuming almost perfect efficiency, forgetting about display list size issues and other nasty problems ;)
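The fill-rate sums above, under the same ( optimistic ) assumptions: 100 MPixels/s base fill, 2-3 clocks per bump-mapped pixel, 640x480 output, no transparency or overdraw:
[code]
base_fill = 100_000_000                       # pixels/s, opaque, best case (IIRC)
for clocks in (1, 2, 3):
    print(clocks, base_fill / clocks / 1e6)   # 100, 50, ~33.3 MPixels/s available
needed = 640 * 480
print(needed * 30 / 1e6, needed * 60 / 1e6)   # ~9.2 to ~18.4 MPixels/s needed at 30-60 fps
[/code]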

There are ways to slow DC down, but there are ways to push it...
 
marconelly! said:
Simon F, I must say that your assessments of DC hardware sometimes really sound, shall I say... a bit too optimistic. Full scene bumpmapping on DC? What is that scene going to have, exactly? As you can see, even the Xbox, with its graphics hardware that is unquestionably more apt at performing a function like that, is struggling to reach even 30FPS in a game that requires everything to be normal mapped (see Deus Ex 2 thread, and Halo 2 author's comments from EGM).

As for those post processing effects such as DOF, etc. I know DC *could* do them, but for whatever reason they always were extremely sparse and not so good looking.

Simon's comment wasn't referring to Halo 2. It was referring to bumpmapping being doable in-game. Your original argument was that N2 couldn't do Halo 2 at 30 fps, and I see no reason why it couldn't.
 
traffic response...

Hi Simon,

HDTV res may not be the most practical use of the GS, but it is possible - there was a good demo of 1080i MPEG playback at E3 last year or the year before... To use it for 3D practically you'd almost have to implement a tile-based rendering solution... ( or you use the 32MB version ;) )

If Naomi2 had been shrunk to a single chip solution ( ELAN + 2xCLX or a single faster Kyro ) then it would probably compete very well. However EE is quite small ( 10.5M transistors ) and most of the GS's 42M transistors are in the embedded RAM.. and in the end Sega didn't go for a single chip.. only the arcade market... ( I like the PVR approach though, and the Mobile chipsets look nice )

To match 10MT/s with 6 attenuated spotlights I need both VUs on the PS2, so I doubt that many games would use that - most of the time you can get away with prelit or area-lit in games.. It's a pity more info didn't come out about ELAN..

It did always surprise me that there wasn't more use of bumpmapping in DC titles. Everyone seemed to pick up on modifier volumes for shadows ( but not really the other more funky uses of them.. ) but the only title I had that seemed to use bumpmapping flickered as if there was a strange z-fighting problem.

The nice thing about the PS2 GS, which people always forget when counting transistors, is that there is no external RAM. This saves cost further down the line.. Sure it seemed huge when it was announced but it was factored for a long lifespan. For the rendering of transparencies ( something that slowed the PSX down ) it is a good solution.

My comment about the VGA was more about the connector. I actually found it annoying when I ran DC games with field buffers that wouldn't display on VGA ( I think Incoming was one ) - the downfiltering on the DC was very nice though...

I think implementing VQ is a simple engineering task, especially on a deferred renderer where the index fetches can occur first, and a cache can collapse the memory accesses. It is no more difficult than any other indirect access. - Making VQ extremely efficient, though, may be more difficult :)
Sometimes I would love full OpenGL colour blending, but I don't think the end user cares...

Sometimes the view that the GS is just a turbocharged PSX gpu, although very simplistic, does have a grain of truth. I'd guess that Sony concentrated on the features that developers wanted at the time ( I still remember reading about PSX games implementing fogging and rain as full screen semitransparent sprites.. a big consumer of fill rate at the time.. )
I can understand why there is no full colour blending in terms of the transistor cost, but given the choice between reduced fillrate / better blending or higher fillrate I'd always go for the latter.

Panajev: Effective use of triangles is always the key. There is a PS2 Linux demo that transforms and draws >32M triangles. - It is strangely unimpressive though, as it isn't really art-led...

The nice thing about the VUs is that although they are less powerful than dedicated HW pipelines they are completely flexible in ways that probably won't hit the PC until DX10. It is trivial to switch lighting on or off dynamically, or calculate bounding box culling around geometry before lighting, or even generate extra geometry dynamically for particle effects / surface generation or even subdivision.

At the end of the day the PS2 doesn't have the paper performance of the Xbox, or even the Gamecube, but it still holds up well when coded well. In the same way the DC would still generate good looking games today. The simple truth is that the end user nowadays has difficulty seeing a doubling of polygon budget.. The difference between 3000-poly and 6000-poly figures is so much more subtle than between 300-poly and 600-poly...

I'd reckon that if you used nice complex lighting and multitexturing your setup may drop below 15M on an Xbox...
 
Sometimes the view that the GS is just a turbocharged PSX gpu, although very simplistic, does have a grain of truth. I'd guess that Sony concentrated on the features that developers wanted at the time ( I still remember reading about PSX games implementing fogging and rain as full screen semitransparent sprites.. a big consumer of fill rate at the time.. )

I can see what you mean: fill-rate and rendering efficiency ( you need the bandwidth to support that fill-rate ) were important features for PSX/soon-to-become-PS2 coders, and so was more programmability on the T&L side: with PS3 I am positive Sony has understood the importance of more flexibility for the pixel/fragment pipeline, and adding that to the raw performance PS3 should also bring will make an interesting combo...

I am not saying "Xbox 2 is DOOOOOMED", I am just saying PS3 should still be interesting... whether it can do 2 MVertices/s more or less than Xbox 2 :)

I can understand why there is no full colour blending in terms of the transistor cost, but given the choice between reduced fillrate / better blending or higher fillrate I'd always go for the latter.

What do you mean ? You'd choose:

1) high fill-rate

or

2) better color blending capabilities with lower fill-rate
 
Your original argument was that N2 couldn't do Halo 2 at 30 fps, and I see no reason why it couldn't.
Because, apparently, it's not a walk in the park on Xbox, a platform that is much more apt at doing per-pixel operations than N2 can ever hope to be. You seem to insist that N2 is capable of everything the Xbox with its NV2X can do, and at the same speed, although even on paper that's obviously not the case.
 