Could Dreamcast et al handle this/that game/effect? *DC tech retrospective *spawn

What you describe is still impressive and makes me just as curious. Whole development houses avoided using it, even though it was in the documentation, because it was too difficult. Yet only a handful of people managed the impossible port by taking advantage of it. What else is in the documentation that devs barely used, or never used at all? And how did a handful of people in their spare time manage such incredible feats when professional devs avoided those hardware features? There's a similar case with the PS2, where devs barely took advantage of one of the VUs.

Well, "the Dreamcast wasn't worth the investment in most companies' eyes" seems to be the most clear-cut answer, really. In a sec I'll mention a similarity it would have had with the PS2, had the DC gotten the same work/budget/time.

In the Ninja docs I noticed they mention the OCRAM/OCINDEX/OCACHE modes, but they state not to touch these when using the Ninja API. I'm guessing naomilib isn't any better. For Kamui, my guess is they didn't think it was worth the effort to go super low-level: cutting your CPU cache in half, using the other half as a really fast scratchpad for matrix/vector math (useful for T&L and physics), and having to manage it all manually. So C/C++ with some (at most) assembly was their choice.
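Roughly, the trick the docs warn you off looks like this. This is just my own minimal sketch from reading the SH7750 manual, not Ninja/Kamui code; the register address, the ORA bit position, and the need to run it from uncached code with the cache flushed first are my assumptions:

```c
#include <stdint.h>

/* SH7750 cache control register and the "OC RAM" enable bit
 * (address/bit per my reading of the hardware manual). */
#define SH4_CCR  (*(volatile uint32_t *)0xFF00001Cu)
#define CCR_ORA  (1u << 5)   /* operand cache RAM mode */

static void enable_ocram(void)
{
    /* After this, 8 KB of the 16 KB operand cache stops being a
     * cache and becomes directly addressable on-chip RAM, leaving
     * only 8 KB of real data cache: the "cut your cache in half"
     * part. Must be done from a non-cacheable region with the
     * operand cache flushed/invalidated beforehand. */
    SH4_CCR |= CCR_ORA;
}
```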

So what did these guys do? They decided to use the scratchpad method: do up to 128 verts (if I remember right) at a time in that 8 KB for the T&L math, then jam it to the GPU as quickly as possible. Falco seems to have taken it a bit further and realized you can do the same thing to accelerate the physics too. Mind you, these guys are technically at a disadvantage, because they don't have the official SH4 pipeline simulator that gives you an exact readout of how the code runs and where it cache-misses (professional devs had this!). They had to make their own homebrew version that isn't 100% accurate. This is the similarity I was speaking of with the PS2: using its vector co-processor to assist in fast math while using its cache as fast memory (in the DC's case you only have the FPU), and shoving it to the GPU as quickly as you can.
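To make the batching concrete, here's the rough shape of the loop. This is a hypothetical sketch, not the actual GTA3/VC port code: the scratchpad address, transform_vertex() and ta_submit() are placeholders I made up, and on real hardware the submit step would typically go through the SH4 store queues.

```c
#include <stddef.h>
#include <string.h>

typedef struct { float x, y, z, u, v; } vertex_t;

/* Placeholder address for the OC RAM window exposed once ORA is set;
 * the real mapping (two 4 KB blocks in the 0x7C000000 area) depends
 * on the cache index-mode bit, so treat this as illustrative only. */
#define OCRAM_SCRATCH ((vertex_t *)0x7C001000u)

#define BATCH_SIZE 128   /* ~128 verts comfortably fit inside 8 KB */

/* Stand-ins for the real per-vertex transform and the TA submit path. */
extern void transform_vertex(vertex_t *v, const float mtx[16]);
extern void ta_submit(const vertex_t *verts, size_t n);

void draw_mesh(const vertex_t *src, size_t count, const float mtx[16])
{
    vertex_t *scratch = OCRAM_SCRATCH;

    while (count > 0) {
        size_t n = count < BATCH_SIZE ? count : BATCH_SIZE;

        /* 1. Pull a batch into the scratchpad. */
        memcpy(scratch, src, n * sizeof *scratch);

        /* 2. Do the T&L math in place; the data physically lives in
         *    the cache RAM, so the FPU isn't waiting on main memory. */
        for (size_t i = 0; i < n; i++)
            transform_vertex(&scratch[i], mtx);

        /* 3. Jam the finished batch at the GPU as fast as possible. */
        ta_submit(scratch, n);

        src   += n;
        count -= n;
    }
}
```

The whole point is step 2: once a block is sitting in on-chip RAM, the transform loop can't take a data-cache miss, which is exactly the kind of stall the official pipeline simulator would otherwise be helping you hunt down.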

You asked where the professionals were and why they weren't using this. It seems that, besides the lack of will to push/fund the DC, it was also a lack of time. I was reading in their gta3/dc3 chat that they found some homebrew DC code from a former DC dev who was pushing a lot of polygons (something like 3.4 million polys per second at 60 fps). On inspection, it seems he was doing a lot of the same tricks they were doing on GTA3/VC to accelerate the matrix math, but written fully in assembly (and that demo was from 2001).

Why does this matter? Because that demo was written by the software engineer responsible for the graphics portion of the cancelled Dreamcast version of Soul Reaver 2. It seems that, given enough time and genuine interest in pushing/funding the machine, the pros would have gotten there past 2001.
 
Kaze Emanuar, a prolific Super Mario 64 modder who has been delving deep into low-level N64 optimization, recently made a video about how that system's MIPS CPU also had barely-ever-used functions to section off areas of its cache as a manually managed scratchpad.

So even that system, which had a longer and more successful commercial life and real 3rd party support, still did not see those more esoteric functions used to any significant extent.

And it shows in Kaze's work, along with that of other recent devs, given that they've been squeezing things out of that machine I'd never have thought possible before. Some dev recently implemented deferred texturing on it, of all things...
 
I'm taking @Sega_Model_4's comments to mean that within its lifetime the PS2 was ultimately well utilised, due to what was learned about the machine and games often being designed specifically around what was learned. Early on, PS2 development was extremely difficult, but with the resources and talent applied to the machine it ended up being used very well. Huge amounts were learned about how to use it, and a massive base of experience was developed.

I'd agree with this, and add that due to the incredible commercial success of the machine and its long life, developers got to make multiple games on the platform and apply that learning to heavily reworking their engines, codebases and production pipelines. This is not to say that the PS2 was not capable of more than it achieved over its commercial lifespan.

The DC never really saw anything like this. PS1 ports, or speedy conversions of hastily made arcade games, produced over a very narrow window of time, can't compare. Shenmue 1 and 2 were huge achievements for the time, but the second game wasn't made from the ground up with everything learned from the first - the games were in development simultaneously, with most of the assets from Shenmue 2 pre-dating the release of Shenmue 1 and in many cases pre-dating the launch of the console itself. Probably the closest we saw to the progression the console was really capable of was Sonic Adventure 1 to Sonic Adventure 2, but even then the latter part of development was done after the console had been cancelled. Even the WinCE versions the machine got turned out to be terribly inefficient, only slowly getting less bad over time.

The scale of what the homebrew community on the DC has done and demonstrated would not be possible on the PS2, simply because the PS2 actually got a fair crack of the whip. Again, this is not to say we saw everything the PS2 could have done, but enthusiasts on the DC are filling in some of the gaps in software progression that were never there on the PS2, because it had a full and happy life.

For a long time the narrative in the gaming community was that it wouldn't have mattered if the DC had lived longer: it was already maxed out because it was so easy to develop for, and it would only have fallen further and further behind. And that had to be true, because there was no software that proved otherwise. GTA3 - being the defining game of the first half of the generation - was often the poster child for this. "It couldn't have handled GTA3, it couldn't have stayed in the generation, it was a mid-gen stopgap" etc etc.

In an attempt to hold onto this (IMO mistaken) idea, there's been a lot of goalpost moving since GTA3's tech director said, basically, "yeah, the DC could have done GTA3, the move to PS2 was for commercial reasons". I won't go into that because it's not productive ... other than to say I had a good old laugh when the talk got to somewhere around "... but Vice City was more advanced, and it probably couldn't do that, so DC GTA3 doesn't change anything!". And then bam, the DC Vice City alpha drops. LMAO!
Nailed it!
 