> Hmm, what do you think about Naomi 2 ports to DC? Basically the same hardware with tons of memory and a really unique T&L unit.

Obviously, you'd have to reduce the poly count and simplify the lighting. The lighting and skinning seem like the most difficult aspects graphically. The released Naomi 2 games don't seem to push static geometry or fillrate as far past what the DC can do as they push lighting and skinning. None of them seem very CPU intensive or complex, so they all seem pretty doable with enough graphical reductions.
Texture-wise, a Naomi 2 game is potentially easier to port to the DC than a Naomi 1 game, since on the Naomi 2 the larger polygon counts require more video RAM to store scene data, leaving less for textures than a Naomi 1 game has. Since the DC can't attempt Naomi 2 polygon counts, its scene buffer would be smaller and take a smaller percentage of video RAM, leaving more room for textures and less need to downsample (the DC is still at a disadvantage overall, just less than you might expect). A Naomi 1 game going all out texture-wise (like Asian Dynamite) would need more downsampling for a DC port than a Naomi 2 game would. Most Naomi 1 games don't go all out, and seem to use a lot of uncompressed textures.
VF4 compresses textures that I wouldn't expect AM2 to compress, like character faces. It kind of gives the impression that the larger scene buffer was causing video memory pressure. But there could be other reasons: they could have just changed their mind about how unacceptable the compression artifacts are, or having more powerful computers could have made them more willing to deal with slow VQ compression.
I was trying to say that you could improve on the textures in the home version. There are several textures left uncompressed on the DC (like the sky, the taxi, and the HUD) that you could go and add compression to, and you could improve the compression ratio on the ones that are already compressed to free up more room, allowing more textures to stay at the original arcade resolution.

> So that's what the DC CT developers meant when they (via a translator) talked about DC textures being more highly compressed!
An S3TC 256x256 16bpp mipmapped texture would be about 42.7 KB, while VQ would be about 23.3 KB. For a similar 128x128 texture, the difference would be smaller due to the fixed 2 KB codebook overhead: 10.7 KB for S3TC versus 7.3 KB for VQ, though the VQ size could be reduced somewhat (typically by 0.5-1.5 KB) at the cost of quality.
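If anyone wants to check the math, here's a quick back-of-the-envelope calculator for those figures. It assumes S3TC/DXT1 at a flat 4 bits per texel, VQ at 1/8th of the uncompressed size plus a full 2 KB codebook, and a full mipmap chain adding roughly a third on top:

```c
#include <stdio.h>

/* Rough size calculators for the figures above. Assumes S3TC/DXT1 is a
 * flat 4 bits per texel, VQ is 1/8th of the uncompressed size plus a
 * full 256-entry codebook (256 entries * 2x2 texels * 2 bytes = 2 KB),
 * and that a full mipmap chain adds roughly 1/3 on top. */
static double s3tc_kb(unsigned w, unsigned h, int mips)
{
	double bytes = w * h / 2.0;                /* 4bpp */
	return (mips ? bytes * 4 / 3 : bytes) / 1024.0;
}

static double vq_kb(unsigned w, unsigned h, unsigned bpp, int mips)
{
	double bytes = (w * h * bpp / 8.0) / 8.0;  /* 1/8th of uncompressed */
	if (mips)
		bytes = bytes * 4 / 3;
	return (bytes + 2048) / 1024.0;            /* + codebook overhead */
}

int main(void)
{
	printf("256x256: S3TC %.1f KB, VQ %.1f KB\n",
	       s3tc_kb(256, 256, 1), vq_kb(256, 256, 16, 1));
	printf("128x128: S3TC %.1f KB, VQ %.1f KB\n",
	       s3tc_kb(128, 128, 1), vq_kb(128, 128, 16, 1));
	return 0;
}
```

Running it reproduces the numbers above: 42.7 vs 23.3 KB at 256x256, and 10.7 vs 7.3 KB at 128x128.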
> Dreamcast made lots of use of 2bpp VQ textures, whereas the most directly comparable S3TC mode was a 24-bit (no-alpha) texture compressed down to 4bpp. Both were lossy, with 2bpp VQ naturally being lossier.

There's no high quality compression format, just one VQ option. VQ on the DC takes the uncompressed texture and reduces it to 1/8th its size, then adds an overhead of up to 2 KB on top for the codebook (which is like a fancy palette, but with tiles instead of single colors). You can use a smaller codebook if the overhead is too much, but that also reduces the quality of the compressed texture. For a 16BPP texture, ignoring codebook size, you end up with a size of 16BPP/8 = 2BPP. The only real texture format with a size of 4BPP is uncompressed 16-color.

> Pretty sure DC also supported 4bpp VQ textures which were much higher quality than 2bpp. If you wanted to use transparencies on DC I think you needed to use 4 or 8 bit CLUT textures or RGB. Transparencies with S3TC were 8bpp, again IIRC.
As with most forms of image compression, ongoing work on improving compressors pays off. The DC was dead before any S3TC-equipped consoles arrived.
You could fake higher quality compression by stretching the texture to double width or height, basically halving the compression ratio (for a 16bpp texture, instead of 2x2 tiles you effectively have 2x1 or 1x2 tiles), which compresses to an effective 4BPP. But texture filtering would be wrong, so it only works well for point-sampled stuff. Scaling the texture up to double height and width before VQ compressing is equivalent to an 8BPP texture.
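As a concrete sketch of that trick, pre-stretching is just duplicating each texel before handing the image to the VQ compressor; something like this for the double-width case, assuming 16bpp texels:

```c
#include <stdint.h>

/* Duplicate every texel horizontally before VQ compression. Each 2x2
 * codebook tile then only covers a 1x2 area of the original image,
 * effectively compressing 16bpp to 4BPP instead of 2BPP (plus the
 * codebook). Bilinear filtering would blend the duplicated texels
 * incorrectly, so this is only useful for point-sampled textures. */
void stretch_double_width(const uint16_t *src, uint16_t *dst,
                          unsigned w, unsigned h)
{
	for (unsigned y = 0; y < h; y++) {
		for (unsigned x = 0; x < w; x++) {
			dst[(y * w + x) * 2 + 0] = src[y * w + x];
			dst[(y * w + x) * 2 + 1] = src[y * w + x];
		}
	}
}
```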
You can VQ compress any texture format, including formats with alpha, normal maps, 8BPP, and 4BPP; they're all 1/8th the size of the original plus codebook overhead. A compressed 1024x1024 4BPP texture without mipmaps would be 66 KB. But there's a hardware bug when both mipmaps and texture filtering are enabled for palettized VQ (see the attached screenshots; you might want to zoom in), and the quality of 4BPP VQ is poor, and uniquely becomes far, far worse when mipmaps are enabled, for reasons I won't bother to fully explain right now (the short explanation is that the tile shape is no longer contiguous or consistent), so they aren't as easy to use as you'd hope.
The official PVR driver/texture compressor handled small codebooks in a weird, overcomplicated, and inflexible way, with only a few size/mipmap combinations allowed to have small codebooks, always at a fixed size, but those aren't hardware limitations. The official compressor also didn't seem to handle non-square textures, but the hardware handles them fine if you aren't using mipmaps. No commercial game ever took advantage of compressed 8BPP or 4BPP because no compressor existed for them at the time.
I think most games would have problems losing 600 KB of texture space. Shenmue is unique because it left RAM free for character streaming, and its streaming could handle running out of memory gracefully.

> @TapamN, you mentioned that you made Shenmue render in 24-bit colour. Is there a universal flag or something to force more games to do so? I hate dithering with all my soul. I've patched many games to disable the dithering, but the banding produced by the 16-bit image is quite noticeable.
It can't be done universally the way my deflicker disable code works, but it's kind of universal in the sense that it's modifying the same driver (ignoring WinCE) the same way in each game. You'd have to find where the PVR driver is initialized and change the frame buffer color depth there. It's not actually that hard once you know how to do it, and could probably even be automated with some work, by searching for patterns that match the PVR driver.
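A minimal sketch of what that automated approach could look like: a masked byte-pattern scan over the game binary. The pattern and mask would be placeholders you'd fill in, not the real PVR driver bytes; mask bytes of 0x00 mark wildcard positions (addresses and immediates that differ between games):

```c
#include <stddef.h>
#include <stdint.h>

/* Return the offset of the first masked match of pat in bin, or -1.
 * A mask byte of 0xFF means "must match exactly"; 0x00 is a wildcard.
 * Once the driver init code is located, a patch tool would rewrite the
 * frame buffer depth setting at a known offset from the match. */
static long find_pattern(const uint8_t *bin, size_t bin_len,
                         const uint8_t *pat, const uint8_t *mask,
                         size_t pat_len)
{
	for (size_t i = 0; i + pat_len <= bin_len; i++) {
		size_t j;
		for (j = 0; j < pat_len; j++) {
			if ((bin[i + j] ^ pat[j]) & mask[j])
				break;
		}
		if (j == pat_len)
			return (long)i;
	}
	return -1;
}
```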
You mean stuff like this at 6:33?

> For instance, I increased the draw distance in Crazy Taxi to the max. The console surprisingly still runs at the same framerate as the vanilla version, but the moment it draws double the polygons on screen, the tile renderer starts to glitch and the console even freezes. In other games it just glitches until fewer polygons are on screen. I could record some gameplay to illustrate what I mean. (Or maybe my console is just dying lol)
> Maybe @TapamN knows more about what may be the cause. Also, it may seem like a stupid question, but is there a way to disable that tile rendering and the order-independent translucency on the console? I think especially the latter slows down rendering a lot when there are transparencies near the camera.
To fix the tile rendering errors, you'd have to allocate more video RAM to the vertex buffer and/or the OPBs. The vertex buffer stores polygon data like the rendering context, positions, UVs, and colors. The OPBs store per-tile pointers to the polygons that might touch each tile. If either runs out, you can't add more polygons to the scene. If you find where the PVR driver is passed the sizes, it's not hard to increase them, but then you could run into texture space issues here too. Patching that is about as hard as doing 24BPP.
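To give a feel for the numbers involved, here's an illustrative OPB sizing calculation. The PVR renders in 32x32 pixel tiles; the per-tile, per-list block size here is a driver configuration parameter, so treat the values as an example rather than what any particular game uses:

```c
/* OPB space scales with tile count, the number of polygon list types
 * in use, and the configured block size per tile per list. */
#define TILE_SIZE 32

static unsigned opb_bytes(unsigned screen_w, unsigned screen_h,
                          unsigned num_lists, unsigned block_bytes)
{
	unsigned tiles = (screen_w / TILE_SIZE) * (screen_h / TILE_SIZE);
	return tiles * num_lists * block_bytes;
}

/* 640x480 is 20x15 = 300 tiles, so e.g. opb_bytes(640, 480, 2, 64)
 * reserves 38400 bytes (~37.5 KB) for opaque + translucent lists.
 * Doubling the block size to handle denser scenes doubles that. */
```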
Hardware transparency sorting can be disabled, but you'll get sort errors with transparencies, unless the game is already (pointlessly) sorting them manually. There's no overhead from transparency sorting if there are no visible, overlapping transparent pixels.
One thing that's more likely to work without huge side effects is disabling modifier volumes on transparent polygons, if the game uses them. Modifier volumes on transparencies increase the sorting load like any visible transparent polygon does (and sorting seems to be O(n^2)), although they're still a bit faster since they don't need texel samples. This would only really help games that use large shadow volumes (like building shadows) that can affect transparent polygons; it wouldn't have much of an effect on games that only use them for character shadows. It might also already be disabled.
I did a quick, incomplete 2x SSAA patch for Rent-A-Hero, and disabling modifiers on transparencies helped a lot with maintaining 60 FPS in areas with building shadows. It still drops to 30 FPS in some places, but those look like they could be fixed by adjusting mipmap detail levels or recompressing the textures at somewhat lower quality.
> "It currently needs the 32MB mod" - So that kills any argument of it being able to run on a DC then.

Just because it currently requires 32MB of main RAM doesn't mean the RAM usage couldn't be reduced to make it fit.
Looking at the model format for PS2 GTA3, there's a lot of room for reducing the size of the models. It uses 32-bit floats for positions; these could be replaced with 16-bit values. You could do the same to shrink the UVs from 8 bytes to 4 bytes. For some map geometry, you might even be able to calculate the UVs on the fly by projecting the texture onto the mesh (like for floors or walls), but it would be hard to convert models to this automatically. Normals are stored as 3 bytes each, but most map geometry repeats most of its normals, so they could be replaced with a palette of normals plus a 1-byte palette index per vertex. The same could be done with vertex colors. You couldn't do all of this for all models transparently (a character model does not repeat normals the way a building does, and would probably have to stick with 3-byte normals), but there's still a lot of room to save RAM just on the models.
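Here's a sketch of what that slimmed-down vertex could look like; the layout and field sizes are illustrative, not the actual RenderWare format:

```c
#include <stdint.h>

/* Per the substitutions above: positions 12 -> 6 bytes, UVs 8 -> 4,
 * normals 3 -> 1, and an assumed 4-byte RGBA vertex color -> 1,
 * shrinking roughly 27 bytes per vertex down to 12. */
struct CompressedVertex {
	int16_t pos[3];   /* fixed-point position, scaled per model */
	int16_t uv[2];    /* fixed-point UVs */
	uint8_t normal;   /* index into a per-model normal palette */
	uint8_t color;    /* index into a per-model color palette */
};

/* Quantize a float coordinate to 16 bits against the model's bounding
 * extent. The precision lost here is the source of the collision
 * issues mentioned below. */
static int16_t quantize(float v, float extent)
{
	float q = (v / extent) * 32767.0f;
	if (q > 32767.0f)  q = 32767.0f;
	if (q < -32768.0f) q = -32768.0f;
	return (int16_t)q;
}
```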
Similar things could be done with the collision data. It looks like San Andreas already does some of these (like 16-bit positions), so that game would be harder to get running on a stock DC than GTA3 or VC. But reducing precision on collision could introduce gameplay bugs, like small holes you could fall through, so it would probably require extra debugging to get working well.
With enough work, I feel like you wouldn't have to cut anything completely to get GTA3 running acceptably on the DC, just shave a couple of millimeters off everywhere. Simplify the collision shapes for cars a bit, drop an iteration of the physics constraint solver for distant objects, cut the number of active cars or pedestrians by 2 or 3, spawn fewer litter objects, maybe simplify the pedestrian AI so they sometimes walk through each other, change the rain effect to be less fillrate-heavy... stuff like that. But doing all of that is a lot more work than making a few big cuts. It's unlikely the fan conversion will put that much work in before they get tired of it, or Rockstar shuts it down, so I doubt we'll ever see DC GTA3 at its best.
> What causes parts to draw and disappear, like the amp? I know on some older sprite consoles, HW sprites would get shared across sprites and they'd share draw time, but here there doesn't seem to be any similar obvious reason for the minimap to get partially drawn.

In this case, it looks like an issue with the emulator and transparency sorting.