OldSchool - How did 16-bit era scanline-based sprite hardware work internally?

milk

Veteran
Inspired by the prolific "questions about the sega saturn" and "questions about the playstation" threads, I felt like asking my own about the SNES/Mega Drive and similar arcade hardware.
Does anybody here have experience with this, and the patience to explain it?

I know how they work from the perspective of a programmer: what their capabilities and limitations are. But I wonder how they go from taking input from the program to producing raster output.

As far as I understand, the SNES, for one, fetches all the graphics needed for each scanline into on-chip registers and blits them into one line buffer for sprites and another for the playfield layers, all within the timeframe of HBlank, so that while the line is actually being scanned out it only has to composite the two. Is that accurate? Does anybody know what the hardware that does this actually looks like? Is the line buffer encoded in actual direct-color RGB, or still in indexed mode?

And the Mega Drive? It shares the same palettes between sprites and its playfield characters, and supports no form of transparency or blending, so I guess it doesn't need separate line buffers for BG and sprites. Is that correct?

Were there smarter approaches at the time for scanline-based rendering (no framebuffer)? What was the reasoning behind those choices? I always wondered if there were ways to make the operation of those machines less rigid. Were there more programmable alternatives possible at the time that could be as performant (60 fps) and as memory-friendly (again, no direct color, no full framebuffer) as the SNES and Mega Drive?
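Just to make it clear what I'm picturing, here's a rough C sketch of that mental model. Every name, buffer size, and per-line limit in it is something I made up for illustration, not taken from the real SNES PPU or Mega Drive VDP: sprites overlapping the scanline get copied into an indexed-color line buffer during "HBlank", and the palette lookup to direct color only happens at scan-out, when the sprite buffer is composited over the background.

```c
/* Hypothetical sketch of a line-buffer based sprite pipeline.
 * Every name, size, and limit here is my own illustration, not
 * taken from the actual SNES PPU or Mega Drive VDP. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SCREEN_W    256
#define MAX_SPRITES 64
#define LINE_LIMIT  8    /* made-up cap on sprites fetched per scanline */
#define TRANSPARENT 0    /* palette index 0 = "nothing here"            */

typedef struct {
    int     x, y;             /* top-left position                  */
    int     h;                /* height in pixels                   */
    uint8_t pixels[8][8];     /* 8x8 tile of 4-bit palette indices  */
    uint8_t priority;         /* 1 = drawn in front of the BG       */
} Sprite;

typedef struct {
    uint8_t index;            /* palette index, 0 = empty           */
    uint8_t priority;
} LinePixel;

/* "HBlank" work: walk the sprite table and copy the pixels of every
 * sprite that overlaps this scanline into the sprite line buffer. */
static void evaluate_sprites(const Sprite *oam, int line, LinePixel *buf)
{
    int fetched = 0;
    memset(buf, 0, SCREEN_W * sizeof(LinePixel));
    for (int i = 0; i < MAX_SPRITES && fetched < LINE_LIMIT; i++) {
        const Sprite *s = &oam[i];
        int row = line - s->y;
        if (row < 0 || row >= s->h)
            continue;                      /* not on this scanline */
        fetched++;
        for (int px = 0; px < 8; px++) {
            int x = s->x + px;
            uint8_t c = s->pixels[row][px];
            if (x < 0 || x >= SCREEN_W || c == TRANSPARENT)
                continue;
            if (buf[x].index == TRANSPARENT)   /* lower-numbered sprite wins */
                buf[x] = (LinePixel){ c, s->priority };
        }
    }
}

/* Scan-out: merge BG and sprite line buffers, and only then look the
 * winning index up in the palette to get a direct RGB value. */
static void composite_line(const LinePixel *spr, const uint8_t *bg,
                           const uint16_t *palette, uint16_t *out)
{
    for (int x = 0; x < SCREEN_W; x++) {
        uint8_t idx = bg[x];
        if (spr[x].index != TRANSPARENT && spr[x].priority)
            idx = spr[x].index;            /* sprite passes in front */
        out[x] = palette[idx];             /* indexed -> 15-bit RGB  */
    }
}

int main(void)
{
    static Sprite   oam[MAX_SPRITES];
    static uint16_t palette[256];
    LinePixel spr_line[SCREEN_W];
    uint8_t   bg_line[SCREEN_W];
    uint16_t  out[SCREEN_W];

    memset(bg_line, 1, sizeof bg_line);    /* BG = solid index 1 */
    palette[1] = 0x001F;                   /* BG color           */
    palette[5] = 0x7C00;                   /* sprite color       */

    oam[0] = (Sprite){ .x = 10, .y = 20, .h = 8, .priority = 1 };
    memset(oam[0].pixels, 5, sizeof oam[0].pixels);

    evaluate_sprites(oam, 24, spr_line);   /* scanline 24 crosses the sprite */
    composite_line(spr_line, bg_line, palette, out);
    printf("pixel 12 = %04x\n", out[12]);  /* expect the sprite color here   */
    return 0;
}
```

That's roughly the model in my head; what I'd love to know is how close it is to the real silicon.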
 