That kind of wrap-around texturing was bad practice even back then. You don't waste anything if you use a texture on single objects or parts of a model instead of on the whole model.
That's not really true. If you're using power-of-two textures and you don't want unevenly stretched textures - because they look bloody awful - you can easily end up with "white space" that consumes memory and does nothing. By packing lots of surfaces onto a single texture you can minimise the space lost in maintaining a regular distribution of texels over your polys. You also minimise the overhead of having a lookup table per texture. Changing the current texture you're sampling from has an overhead too, although I don't know how big an impact multiplying that chore several-fold would have.
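To put some very rough numbers on that "white space" point - just a sketch with made-up surface sizes, assuming each separate texture has to be rounded up to power-of-two dimensions at 32 bits per texel:

# Hypothetical example: three surfaces stored as separate power-of-two textures
# vs. packed together onto one 256x256 atlas sheet.
def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

def padded_bytes(w, h, bits_per_texel=32):
    # Round each dimension up to a power of two, as the format requires.
    return next_pow2(w) * next_pow2(h) * bits_per_texel // 8

surfaces = [(200, 140), (96, 80), (60, 60)]              # made-up surface sizes
separate = sum(padded_bytes(w, h) for w, h in surfaces)  # 344064 bytes
atlas = padded_bytes(256, 256)                           # 262144 bytes - all three fit on one sheet
print(separate, atlas)

So even in that small made-up case, storing the surfaces separately costs about a third more memory than packing them, before you count the per-texture overheads.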
If you're loading everything in during one loading screen and you want decent texturing with properly spaced texels then large textures make sense. If you're streaming textures in and the gain from small textures is greater than the wastage then it's probably a different matter.
Particularly on old systems where a single texel often covers several or many pixels, uneven stretching across X and Y really is horrible though. I can't stress this enough.
The 8-bit consoles had to define the whole world with 16 colours, whereas on PS2 you had polygons, textures and 16 million colours to do it with, so there's no comparison there.
A single texture could easily cover a large proportion of the screen (potentially the entire screen), depending on camera and object placement. It's valid to point out that 16 colours was limiting in the 1980s, and it could easily be very limiting (and have a major negative impact) in later decades too!
You are not supposed to draw details when you have that many polys. The rare poster, painting, sign or other object in a game that needs more than 16 colours can be done fine with 8-bit without breaking the bank.
Details are still "drawn" using textures even today, even with 20 or 30 or more times the polygon complexity! You should try asking an artist whether there is any benefit to > 16 colours in their texturing work. Seriously.
Try doing some conversions of monochrome-ish stuff to 16 colours. Often, if there are no large colour gradations (which could be done with vertex colours anyway) but lots of small details, it's quite hard to spot the difference.
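If you want to try it yourself, something like this will do it (just a sketch using Pillow; the filenames are placeholders) - compare the 16-colour output against the original:

# Quantise an image down to a 16-colour palette (roughly a 4-bit CLUT).
from PIL import Image

img = Image.open("source.png").convert("RGB")   # placeholder filename
clut16 = img.quantize(colors=16)                # median-cut quantisation to 16 colours
clut16.convert("RGB").save("source_16col.png")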
And again, even if we suppose that colour depth was a bigger problem than I'm making it out to be, there would still be no explaining the difference in resolution. S3TC has the same bits per pixel, and the DC had fewer MBs per frame.
Well if you don't think quantity of texture memory used and texture compression supported (CLUT vs VQ/S3TC) were a huge issue - and we've had devs flat out state that they were - what do you put it down to?
Unless of course you are implying that devs used 8bit textures very often to compensate, which I think we can safely say, based on anecdotal evidence and visual evidence, was not the case.
8-bit CLUT textures, while they would have been on par with VQ in colours, or better in fact, would have been a complete waste of resources most of the time and would have been much blurrier.
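Some back-of-envelope numbers on why (a sketch, not exact hardware layouts; the VQ figure approximates the DC scheme of a 2KB codebook plus one byte index per 2x2 block, i.e. roughly 2 bits per pixel on big textures):

# Approximate per-texture footprints in bytes.
def clut8(w, h):  return w * h + 256 * 2             # 8bpp indices + 256-entry 16-bit palette
def clut4(w, h):  return w * h // 2 + 16 * 2         # 4bpp indices + 16-entry palette
def vq(w, h):     return 2048 + (w // 2) * (h // 2)  # codebook + one index per 2x2 block
def s3tc(w, h):   return w * h // 2                  # DXT1 at 4 bits per pixel

for w, h in [(256, 256), (512, 512)]:
    print((w, h), clut8(w, h), clut4(w, h), vq(w, h), s3tc(w, h))
# 256x256 comes out at roughly 64KB for 8-bit CLUT vs roughly 18KB for VQ.

So at the same resolution an 8-bit CLUT is roughly three to four times the size of a VQ texture - which is the trade-off I mean: within the same memory budget you'd have to drop resolution, hence blurrier.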
And PS2 textures were, in fact, blurrier. I think for the DC ports they probably did use quite a few 8-bit cluts. Perhaps the PS2 was also less efficient with the way it stored textures - perhaps using more, smaller textures. This might lead to some of the issues I was talking about above, where textures are broken up and space (memory) is wasted. Maybe?
What size textures did the PS2 and Dreamcast support? I'm sure this has been discussed here before, but search is failing me.
If that were true, which I don't think it is, it's strange that not a single game chose to go in the other direction. Even games that were quite sparse on geometry and had limited environments, like ICO and Ecco the Dolphin, only had slightly better textures than other games.
Incidentally, Ecco, a DC port, has some of the best textures ever on PS2, even though they are still downgraded from the DC original.
Ecco was really colourful. Using 16-colour textures for everything would have wrecked it - I can quite imagine that game using quite a few 8-bit CLUTs. ICO seemed to have more detailed geometry than DC games, so that should eat into memory (relative to DC games) too.
There was one game, Jak and Daxter, that had quite detailed textures, with none of the low-res blur apparent in almost all other games (the sequels didn't impress me as much). The downside was that it mainly used small, heavily tiled textures, which was noticeable once you spotted it.
Point being that it was not some kind of deficiency in the system that prevented a high texel-to-pixel ratio.
I didn't think there was such a deficiency - texture resolution is the only issue I can think of for the blurry textures thing.
They were more detailed without question. Look at Metroid and RE4 for some good examples. 256x256 and 512x512 textures all over the place.
I was thinking specifically of ports from PS2 to Gamecube (the kind ERP mentions at the start of this thread), where the structure of the game might not have allowed the GC's A-RAM to be used optimally.
Well, detail maps that aren't supposed to be transparent in the details, water, windows, quads where you don't want the hard edges of binary alpha, etc. Depending on the game, of course, but they are not that rare.
They were on the DC! Opacity and binary alpha ahoy! The DC also supported 4- and 8-bit CLUTs though, IIRC (it definitely supported some form of CLUT), so it had a reasonable fallback when VQ couldn't be used. I think some of the early Naomi games actually used quite a lot of CLUTs even when they didn't need to. I guess artist familiarity and liking a certain "look" that fits your older hardware could be issues, or perhaps the art was being worked on before the hardware and tools were finalised.