Why does Sony create such wacky architectures?

boring unrelated observation...
TTT chars average 2-3 texture passes, while DOA2 ones are often missing even the base texture... :oops:

Uhmm...
The lighting was quite nice in DoA2, and TTT was roughly similar too.
Can someone please define "lighting" for me, because I have no f#$#$ clue what some people are referring to in these arguments - at least not after I read what they have to say about it.

Oh, the obligatory opinion, so I get in spirit of the thread :
I was always under the impression TTT used more polys just for the floor tiles than an entire DOA2 stage. :devilish:
 
That technical doc is pretty damning - 100% CPU and renderer to generate 6Mpps. The nicest thing about PS2 ( and GC and Xbox ) is that the benchmark figures in demos actually leave the CPU pretty idle in most cases... For PS2 >32Mpps can be achieved using VU1 alone for prelit backgrounds, or >20Mpps for lit and transformed. ( Xbox is even higher.. ) ( check the playstation2 linux site for some code.. ) leaving the CPU free to handle animation and gamecode..
The Testdrive guys pushed the DC hard, but they also pushed the PS2 hard as well, and it shows how good they are as coders more than anything else..
Even the PSX pushed some good numbers through near the end... 400Kpps was possible in some engines as devs pushed harder.. ( The GTE throughput was very high compared to the triangle drawing capability - almost reversed for PS2.... )
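As a rough back-of-the-envelope sketch of where figures like ">32Mpps prelit" come from: divide the vector unit clock by a per-vertex cycle budget. The ~294.9 MHz clock is the EE/VU clock; the cycles-per-vertex numbers below are illustrative guesses chosen to land near the quoted rates, not measured loop timings.

```python
# Peak vertex throughput from a per-vertex cycle budget (sketch).
# VU1 runs at the EE clock (~294.9 MHz); cycle counts are made up
# to illustrate the arithmetic, not taken from real microcode.

VU1_CLOCK_HZ = 294.9e6

def peak_mverts(cycles_per_vertex):
    """Peak transformed vertices per second, in millions."""
    return VU1_CLOCK_HZ / cycles_per_vertex / 1e6

# ~9 cycles/vertex for a prelit path lands near the >32Mpps figure;
# ~14 cycles/vertex for lit + transformed lands near 20Mpps.
print(f"prelit: {peak_mverts(9):.1f} Mverts/s")
print(f"lit+transformed: {peak_mverts(14):.1f} Mverts/s")
```

The point of the exercise is just that these headline rates are VU-loop arithmetic, with the CPU left free for game code.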
 
Better yet, let's post screenshots of the DOA2 stages that (intentionally) don't use lots of textures.
As people often let their nostalgic feelings blur their vision, a reality check every now and then is kinda necessary.

If you compare the two snow levels... there's a lot to see there besides just the textures. Characters look just flat in DOA2 compared to those in TTT.
 
The Testdrive guys pushed the DC hard, but they also pushed the PS2 hard as well, and it shows how good they are as coders more than anything else..

PORT


Well, if you compare the two snow levels... there's a lot to see there besides just the textures. Characters look just flat in DOA2 compared to those in TTT.

The character texture style in DOA2 is by choice. Some of the other stages in DOA2 show a lot more variety in textures and polys. Anyway I think we should get back to the original topic :)
 
Define lighting....

In this case,

arbitrary assignment of realtime capabilities to something which is almost completely prelit...

A bit like the exclamations of 'bump mapping' deduced from viewing static screenshots, and the complete self-delusion of viewing highres mockups.. ( Photoshopped or game engine generated... ) and declaring them indicative of final quality... ( Every platform has had guilty examples of this... )

This thread is great - It's almost like a time warp back 3 years to the Japanese PS2 launch.... ( Not quite back to the DC launch, where the Sonic runs at 60Hz, and SegaRally 2 is Arcade perfect claims flew thick and fast... )
 
as an observation....

Risc processor with specific matrix vector multiply engine
GPU with Tile based rendering, and display list buffered in VRAM
...
Risc processor with 128bit multimedia extensions
Programmable SIMD coprocessor
Programmable SIMD transform engine
GPU with embedded VRAM and simple rasterisation
...
Risc processor with paired floating point extensions
Fixed function T&L with extensive flexibility.
GPU with embedded VRAM and extensive programmable pixel control on rasterisation
...
Cisc/Risc processor with 128bit multimedia extension and SIMD floating point
Completely programmable pipelined SIMD transform engine
Programmable pixel engine
...

Which one is more exotic?????
 
Fafalada:
Can someone please define "lighting" for me
I was basically referring to the apparent number of sources and how smooth its variance was over the characters. Neither game really flaunts anything out of the ordinary resulting from lighting calcs like self-shadowing or anything (that I remember). In the taunt sequences before the fights in DoA2, you can see the characters highlighted from different directions as well as the standard ability to obscure sources (like having your back towards the sun). TTT had some nice Soul-Calibur-like spark effects in addition, and other than that looked roughly in the same league to me.
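What's being described there is basically per-vertex directional lighting: sum a clamped N·L term per source, which is what gives both the smooth variance over the character and the ability to have your back towards the sun. A minimal sketch (light directions and weights here are made up for illustration):

```python
# Per-vertex lighting sketch: clamped Lambert (N.L) terms summed over
# several directional sources, plus a flat ambient. All values are
# illustrative, not from either game.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vertex_light(normal, lights, ambient=0.1):
    """Intensity at a vertex; 'lights' is (direction, strength) pairs."""
    intensity = ambient
    for direction, strength in lights:
        intensity += strength * max(0.0, dot(normal, normalize(direction)))
    return min(intensity, 1.0)

sun = [((0.0, 1.0, 0.0), 0.8)]
print(vertex_light((0.0, 1.0, 0.0), sun))   # facing the sun: lit
print(vertex_light((0.0, -1.0, 0.0), sun))  # back to the sun: ambient only
```

The max(0, ...) clamp is exactly the "source gets obscured" behaviour; more entries in the lights list is the "apparent number of sources".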

You sounded like you didn't agree, so I'm definitely open to you pointing out what I missed.

Crazyace:
The nicest thing about PS2 ( and GC and Xbox ) is that the benchmark figures in demos actually leave the CPU pretty idle in most cases... For PS2 >32Mpps can be achieved using VU1 alone for prelit backgrounds, or >20Mpps for lit and transformed. ( Xbox is even higher.. ) ( check the playstation2 linux site for some code.. ) leaving the CPU free to handle animation and gamecode..
No argument there. The PS2 was well thought-out for crunching math on lots of vertices while dividing tasks among the parts of the EE. Definitely stops you from having to make so many fundamental compromises at that stage when choosing between design decisions.

marconelly!:
If you compare two snow levels... there's a lot to see there except for just textures.
Yeah, like how the Dead or Alive 2 stage actually has a gigantic underground section with ice crystals all over to which you can fall, and not the infinitely scrolling boring floor that's in TTT.
 
Re: as an observation....

Crazyace said:
Risc processor with specific matrix vector multiply engine
GPU with Tile based rendering, and display list buffered in VRAM
...
Risc processor with 128bit multimedia extensions
Programmable SIMD coprocessor
Programmable SIMD transform engine
GPU with embedded VRAM and simple rasterisation
...
Risc processor with paired floating point extensions
Fixed function T&L with extensive flexibility.
GPU with embedded VRAM and extensive programmable pixel control on rasterisation
...
Cisc/Risc processor with 128bit multimedia extension and SIMD floating point
Completely programmable pipelined SIMD transform engine
Programmable pixel engine
...

Which one is more exotic?????

The one that's difficult to program ;)
 
Crazyace:
Risc processor with specific matrix vector multiply engine
GPU with Tile based rendering, and display list buffered in VRAM
...
Risc processor with 128bit multimedia extensions
Programmable SIMD coprocessor
Programmable SIMD transform engine
GPU with embedded VRAM and simple rasterisation
...
Risc processor with paired floating point extensions
Fixed function T&L with extensive flexibility.
GPU with embedded VRAM and extensive programmable pixel control on rasterisation
...
Cisc/Risc processor with 128bit multimedia extension and SIMD floating point
Completely programmable pipelined SIMD transform engine
Programmable pixel engine
...

Which one is more exotic?????
That's a good point. In my first message in this topic, I brought up too that I'm not so convinced the PS2's architecture is all that exotic comparatively. As someone else mentioned earlier, being off-the-shelf doesn't preclude a design from being exotic. The design teams for these off-the-shelf parts set out with a goal of achieving the best performance possible, same as with Sony's team. If they take a new approach to getting there and they still maintain whatever compatibilities they have to on a larger system architectural scale, they can be delivering as exotic a solution as Sony does with their custom parts.
 
PC-Engine - Which one would you find more complex to program on??? ;)

Lazy8s: I think there is a fundamental misconception here. The only 'off the shelf' product might be the P3 core in the Xbox. The SH4 in the DC is a specially modified design, as is the PPC core in the Gamecube. The NVidia chipset is custom made, as is the ArtX flipper, and even the CLX was a custom design, with features not present in the PVR2 or Kyro parts..
The EE is a custom processor, building on the MIPS core licensed to Toshiba, and the GS is custom.
The only difference is that Sony builds the chips themselves, so factory R&D ends up being factored into the initial costs.... but whereas NVidia/Hitachi/NEC/IBM/ATI etc profit(ed) from chip sales and design wins, all of the profit generated at each level goes to Sony...
 
Yeah, the core design was adapted to work within their specific console environment for all those licensed parts. In regard to the SH-4, I remember a SEGA engineer commenting that they had Hitachi add a powerful floating-point unit to make it more applicable to the new generation of 3D games the Dreamcast would be doing. The article also talks about the selection process:

"The CPU was clearly an important part of the Dreamcast specification, and selection of the device was a lengthy and carefully considered process. Factors considered included performance, cost, power requirements, and delivery schedule. There wasn't an off-the-shelf processor that could meet all requirements, but Hitachi's SH-4 processor, which was still in development, could adapt to deliver the 3D geometry calculation performance necessary. The final form has an internal floating-point unit of 1.4 Gflops, which can calculate the geometry and lighting of more than 10 million polygons per second."
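Those two quoted numbers imply a per-polygon floating-point budget worth sanity-checking: 1.4 Gflops over 10M polygons/s is at most ~140 flops per polygon. The transform/lighting split in the comment below is a hypothetical illustration, not from the article.

```python
# Sanity check on the quoted SH-4 figures: what per-polygon flop
# budget does 1.4 Gflops at >10M polygons/s actually imply?

FPU_GFLOPS = 1.4
POLYS_PER_SEC = 10e6

flops_per_poly = FPU_GFLOPS * 1e9 / POLYS_PER_SEC
print(f"budget: {flops_per_poly:.0f} flops per polygon")

# For context (hypothetical split): a 4x4 matrix * vector transform is
# ~28 flops per vertex (16 muls + 12 adds), leaving room for simple
# per-vertex lighting when vertices are shared between polygons.
```

So the claim is at least arithmetically plausible for transform-plus-lighting workloads, which fits the "geometry and lighting" wording in the quote.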

The first page of the article also talks about the philosophies and goals for the design on a whole, so there's some insight there into why they selected the partners and parts they did:

http://www.computer.org/micro/articles/dreamcast.htm

Sony taking control of the technology at all stages is definitely a wise move if they're confident high volume demand will be there.
 
Ah

That's obviously the Gamecube, where only inline assembly and tight control of the cache give good performance in softskinning, or the Xbox where vertex shader assembly and micromanagement of the pixel pipeline are needed... or even the Dreamcast, where direct control of the CPU store buffers is needed to get efficient data transfer to the CLX chip, and rendering loops still have to be in assembly for performance.

:rolleyes:

The only machine that gives adequate performance without complex programming is the PC, as it is a moving target that is almost never utilised fully. To achieve impressive performance on any console requires 'getting your hands dirty'. Libraries and middleware may make it easier to start with, but to compete on a fixed platform you want any advantage you can get... It will be interesting to see the Xbox games try to keep up with the DX9 class PC ports later in its life - I'm sure there'll be some amazing looking games that will look comparable to titles on PCs running 4-5 times faster... ( barring inevitable comparisons to 1600x1200 VGA displays etc etc etc...... )
 
Time and resources aren't necessarily free and the key word was adequate performance. Of course all architectures would benefit from low level coding and micromanagement ;)
 
very true that time and resources aren't free... However given the competition in the console market, where the platform is fixed, investing the extra time and resources can be worthwhile... Once you have produced a 'state of the art' engine you can then proceed to milk it ( the Carmack effect... ;) )
 
Crazyace said:
very true that time and resources aren't free... However given the competition in the console market, where the platform is fixed, investing the extra time and resources can be worthwhile... Once you have produced a 'state of the art' engine you can then proceed to milk it ( the Carmack effect... ;) )

True..lucky for Carmack FPS are VERY popular. 8)
 
Re: PORT

PC-Engine said:
Crazyace said:
Sorry PC-Engine, not talking about the TestDrive port, but their original PS2 work...

Well according to the SONY boys, it was the lack of docs ;)

nice jab man ;)

Well according to Hannibal from Ars Technica, a lot of developers complimented him because that was the best English PS2 documentation ( for introduction to the quirks of PS2 architecture )...

According to the Volition guys, when they started development of Red Faction and Summoner most of the documents were in Japanese and they could not really understand much of it...

According to numerous reports, several developers lamented the lack of good high level libraries to start development...


Crazyace,

you said that the GTE could transform polygons faster than the PSX's GPU could draw them, and that the PS2 is almost the opposite in this regard ( if I understood correctly )...

I thought that triangle set-up would have limited the polygons/s figures before we became T&L limited... even assuming 75 MVertices/s, VU0 and VU1's max is well above 80 MVertices/s ( of course they would not fit through the GIF-to-GS bus; still, I thought that triangle set-up would become a bottleneck before T&L... )
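The question boils down to taking the minimum over the three candidate limits. A sketch of that reasoning, where the 80 Mverts/s echoes the figure above but the setup rate, bus bandwidth, and vertex size are placeholder assumptions purely to show the structure of the comparison:

```python
# Effective triangle rate = min(T&L rate, setup rate, bus feed rate).
# The T&L number echoes the thread; the setup rate, bus bandwidth,
# and bytes-per-vertex below are placeholder assumptions.

def effective_mtris(tnl_mverts, setup_mtris, bus_gbs, bytes_per_vertex,
                    verts_per_tri=1.0):
    """All rates in millions/s; verts_per_tri ~1 for long strips."""
    tnl_limit = tnl_mverts / verts_per_tri
    bus_limit = bus_gbs * 1e9 / bytes_per_vertex / verts_per_tri / 1e6
    return min(tnl_limit, setup_mtris, bus_limit)

# e.g. 80 Mverts/s of T&L, a 75 Mtri/s setup guess, a 1.2 GB/s path,
# 32 bytes per vertex:
print(f"{effective_mtris(80, 75, 1.2, 32):.1f} Mtris/s")
```

In this made-up configuration the bus limit bites first, before either setup or T&L, which is the "they would not fit in the GIF-to-GS bus" point.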
 
Well according to Hannibal from Ars Technica, a lot of developers complimented him because that was the best English PS2 documentation ( for introduction to the quirks of PS2 architecture )...

According to the Volition guys, when they started development of Red Faction and Summoner most of the documents were in Japanese and they could not really understand much of it...

According to numerous reports, several developers lamented the lack of good high level libraries to start development...

So how was Madden released shortly after launch?
 