Digital Foundry Retro Discussion [2016 - 2017]

By using a software renderer they could specify texture coordinates for the vertices. This meant they could effectively "slide" a large environment texture around the polygons, dependent upon where the vertex or surface normal was pointing (I'd guess a lookup table for texture coordinate based on normal values was used for speed).
Yup, sounded a lot like what we did in old demos.
UV taken directly from the normals, multiplied by the distance from the centre to the edge of the texture (32 for a 64x64 texture); if a more mirror-like appearance is needed, add a small offset from the screen coordinates.

When you know that the object is small and in the center of the screen, there is no need for perfect texturing, and you can bypass a lot of code, like clipping.
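Something like this, in rough C (a minimal sketch of the demoscene trick described above; the names and float math are mine, and a real routine of the era would be fixed point and likely table driven, as guessed earlier in the thread):

```c
#include <stdint.h>

#define ENVMAP_SIZE 64
#define ENVMAP_HALF (ENVMAP_SIZE / 2)   /* 32: centre-to-edge distance */

/* Hypothetical 64x64 environment texture. */
extern uint8_t envmap[ENVMAP_SIZE][ENVMAP_SIZE];

/* Map a view-space normal (nx, ny in [-1, 1]) straight to texture
   coordinates. As the normal turns, the texture "slides" across the
   surface, which reads as a reflection. */
static void normal_to_uv(float nx, float ny, int *u, int *v)
{
    *u = ENVMAP_HALF + (int)(nx * (ENVMAP_HALF - 1));
    *v = ENVMAP_HALF + (int)(ny * (ENVMAP_HALF - 1));
}

/* Optional: nudge the UVs by a fraction of the screen position (sx, sy)
   for a more mirror-like look, wrapping to stay inside the texture. */
static void mirror_offset(int sx, int sy, int *u, int *v)
{
    *u = (*u + (sx >> 4)) & (ENVMAP_SIZE - 1);
    *v = (*v + (sy >> 4)) & (ENVMAP_SIZE - 1);
}
```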
 
By using a software renderer they could specify texture coordinates for the vertices. This meant they could effectively "slide" a large environment texture around the polygons, dependent upon where the vertex or surface normal was pointing (I'd guess a lookup table for texture coordinate based on normal values was used for speed).
But what I find confusing is that they made it "read" the information in triangles, which I guess represented the texture coordinates that would "wrap" onto the polygons. But the Saturn was using quads. How do you make triangles fit the quad surfaces perfectly?
Unless the very limited number of models that used the effect were made of quads where one vertex sat exactly on a straight line between two other vertices, or exactly on top of another, or the software renderer was able to produce triangle-based models running exclusively on the Slave Processor.
Which I guess is why they couldn't implement it in more places in the game.
Was it on a separate plane that wouldn't fit with the rest of the 3D environment that was running on hardware on the other processor?
 
But what I find confusing is that they made it "read" the information in triangles, which I guess represented the texture coordinates that would "wrap" onto the polygons. But the Saturn was using quads. How do you make triangles fit the quad surfaces perfectly?
Unless the very limited number of models that used the effect were made of quads where one vertex sat exactly on a straight line between two other vertices, or exactly on top of another, or the software renderer was able to produce triangle-based models running exclusively on the Slave Processor.
Which I guess is why they couldn't implement it in more places in the game.
Was it on a separate plane that wouldn't fit with the rest of the 3D environment that was running on hardware on the other processor?
They wrote a software triangle renderer and didn't use the quad hardware, just like you would on any CPU when you don't want to use the GPU.

Then they overlaid the result onto the other image, as a sprite, at the correct location and size.
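In spirit, that composite step is just a colour-keyed blit; here's a minimal sketch (the buffer layout and names are made up, and on the real hardware the buffer would presumably be handed to VDP1 as sprite data rather than copied pixel by pixel on the CPU):

```c
#include <stdint.h>

/* Copy the CPU-rendered buffer over the main frame at (dst_x, dst_y),
   treating colour 0 as transparent, like a colour-keyed sprite. */
void composite_sprite(uint16_t *frame, int frame_w,
                      const uint16_t *buf, int buf_w, int buf_h,
                      int dst_x, int dst_y)
{
    for (int y = 0; y < buf_h; y++)
        for (int x = 0; x < buf_w; x++) {
            uint16_t px = buf[y * buf_w + x];
            if (px != 0)   /* skip transparent (key-coloured) pixels */
                frame[(dst_y + y) * frame_w + (dst_x + x)] = px;
        }
}
```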
 
Which I guess is why they couldn't implement it in more places in the game.
Was it on a separate plane that wouldn't fit with the rest of the 3D environment that was running on hardware on the other processor?
To reiterate what others are saying, hopefully for clarity: they wrote a PC-type renderer (e.g. Frontier: Elite II) running on the CPU. It drew and shaded triangles. They then composited this over the Saturn's GPU output by drawing the render as a sprite over the top.
 
So the software renderer on the Saturn's CPU was indeed drawing triangles and rendering independently from the rest, which I guess is why it would have been almost impossible to use this during gameplay (i.e. a controllable character rendered on the CPU, like Metal Sonic, running in the GPU-rendered scene).


 
So the software renderer on the Saturn's CPU was indeed drawing triangles and rendering independently from the rest, which I guess is why it would have been almost impossible to use this during gameplay (i.e. a controllable character rendered on the CPU, like Metal Sonic, running in the GPU-rendered scene).

Not necessarily. The Saturn has two main CPUs, and they only used one of them for this.
 
By using a software renderer they could specify texture coordinates for the vertices. This meant they could effectively "slide" a large environment texture around the polygons, dependent upon where the vertex or surface normal was pointing (I'd guess a lookup table for texture coordinate based on normal values was used for speed).
Thanks for the explanation.

That was an incredibly cool effect at the time! :) Maybe I had seen it before, but it was when I played Need for Speed III: Hot Pursuit for the first time that I really remember it, and it impressed me. Seeing reflections on the hood of the car thanks to the 3D accelerator card was such an experience... I was in love with the hardware renderer because of it. NFS 3 was playable with a software renderer, but some effects like environment mapping and coronas were missing, iirc.
 
Not necessarily. The Saturn has two main CPUs, and they only used one of them for this.
Yeah, but then there is this:
They then composited this over the Saturn's GPU output by drawing the render as a sprite over the top.
To me it sounds like the specific effect was rendered as a completely separate object by the CPU, composited over (as a sprite), and hence could not belong with the rest of the 3D rendered objects, which were handled and rendered completely by the VDP1. It sounds like an overlaid "object" on top of the normally rendered scene.
edit: and maybe even if it could belong with the rest, I think it would have been too much load for the CPU to render the object from every possible angle in coordination with what's going on in the rest of the rendered scene.
 
To me it sounds like the specific effect was rendered as a completely separate object by the CPU, composited over (as a sprite), and hence could not belong with the rest of the 3D rendered objects, which were handled and rendered completely by the VDP1. It sounds like an overlaid "object" on top of the normally rendered scene.
edit: and maybe even if it could belong with the rest, I think it would have been too much load for the CPU to render the object from every possible angle in coordination with what's going on in the rest of the rendered scene.
There was no depth sorting, so you couldn't have the reflected elements be hidden if something got between them and the camera. I guess this is why the feature didn't appear on major in-game elements. As to whether there was too much going on for the CPU to compute anything else: entire texture-mapped 3D games (e.g. Frontier: Elite II) were running on a 386-class PC, so I doubt this rendering saturated the processor. Bandwidth may have been a concern due to the shared bus.
 
I can't give this guy any kudos when, 30+ years ago, people were doing complex 3D math in games like Elite and Mercenary on 8-bit processors. Try doing fast 3D without any hardware float support, or even division or multiplication instructions.

Bah humbug.
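For anyone who hasn't tried it: with no MUL instruction, even a single product has to be built from shifts and adds, bit by bit. The sort of routine a 6502 coder would spell out in assembly, sketched here in C:

```c
#include <stdint.h>

/* Shift-and-add multiply: examine the multiplier one bit at a time,
   adding the (progressively doubled) multiplicand for each set bit. */
uint16_t mul8(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    uint16_t addend = a;
    while (b) {
        if (b & 1)
            result += addend;
        addend <<= 1;
        b >>= 1;
    }
    return result;
}
```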
 
I can't give this guy any kudos when, 30+ years ago, people were doing complex 3D math in games like Elite and Mercenary on 8-bit processors. Try doing fast 3D without any hardware float support, or even division or multiplication instructions.

Bah humbug.
Yeah, he talks about how it took large teams to make these games back then. While some AAA games were indeed created by half-a-dozen-person teams back then, quite a few were single-handedly created by a sole coder-designer, with maybe an artist and a sound designer. And few were as simplistic as his.
Retro City Rampage is a much more complex and fully featured homebrew game for the NES that I believe a single dude created.
But still, let's not bash the guy too harshly. For someone to try and make an actual functioning Game Boy game in 2017 just for shits and giggles is still noteworthy, albeit not entirely unheard of. And it's very nice of him to share his postmortem with the web, so thanks for that.
 
For someone to try and make an actual functioning Game Boy game in 2017 just for shits and giggles is still noteworthy, albeit not entirely unheard of. And it's very nice of him to share his postmortem with the web, so thanks for that.

You know this author is an indie dev?
 
There was no depth sorting, so you couldn't have the reflected elements be hidden if something got between them and the camera. I guess this is why the feature didn't appear on major in-game elements.
Yes, that's what I meant but didn't know how to put into words.
As to whether there was too much going on for the CPU to compute anything else: entire texture-mapped 3D games (e.g. Frontier: Elite II) were running on a 386-class PC, so I doubt this rendering saturated the processor. Bandwidth may have been a concern due to the shared bus.
The reason I suspected as much is that they were using multiple processors to render different aspects of the scene, so the CPU, rendering the reflected object separately, had to sync and calculate additional information based on what was going on with the rest in order to display it properly (angle, scale, depth, etc.). I guess the bandwidth, as you said, might have been the bottleneck.
 
Syncing up the angle etc. is not that hard. All you need to know is the camera's position and orientation, which is just a dozen or so variables.
Depth sorting is not that big of an issue either, not in that gen at least. Regular "GPU" rendering back then had bad depth sorting itself too: no z-buffer, just per-primitive sorting with the painter's algorithm, with the sorting oftentimes done on a per-object basis as an optimisation, and sprite billboards used as impostors for 3D objects being an extremely common trick, with all the artifacts this incurs considered acceptable by the standards of those times. A chrome object could very well be inserted into the scene as a billboard impostor that just happens to be rendered by the CPU in real time just fine.
I really think the only bottleneck was bandwidth. Texture mapping is a big bandwidth hog, and so is simply rasterizing filled polygons to some extent, especially without hardware acceleration for it. If the second CPU had its own memory pool or very fat caches, that might have been more feasible, but with a shared bus, that silly effect would handicap the rest of the system too much.
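A per-object painter's sort really is tiny; a minimal sketch of the idea (the types and draw_object() are hypothetical):

```c
#include <stdlib.h>

typedef struct {
    float depth;   /* view-space z of the object's centre */
    int   id;      /* handle passed to the draw call */
} Object;

/* Sort back to front: larger depth (farther away) comes first. */
static int cmp_depth(const void *a, const void *b)
{
    float da = ((const Object *)a)->depth;
    float db = ((const Object *)b)->depth;
    return (da < db) - (da > db);
}

extern void draw_object(int id);   /* hypothetical per-object renderer */

void draw_scene(Object *objs, int n)
{
    qsort(objs, (size_t)n, sizeof(Object), cmp_depth);
    for (int i = 0; i < n; i++)
        draw_object(objs[i].id);   /* nearer objects simply overdraw */
}
```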
 
DF Retro: Donkey Kong Country + Killer Instinct - A 16-Bit CG Revolution!


/ Ken
When I first saw Donkey Kong Country on a SNES I couldn't believe my eyes. It looked easily as good as anything you could play on the more powerful systems that came out the next year. The impressive visuals and brilliant platforming gameplay made the game an icon, I think. I enjoyed the sense of humour, Cranky's Cabin and how amusing the old chap looked, and the simple fact of finding the letters of a word... In the end it is a simple pick-up-and-play 2D game which also had an awesome two-player mode: player 1 controls Donkey Kong, player 2 takes control of Diddy Kong, and you can tag in and out while playing a level.

Another thing I loved about the game is that you never "walked alone". The AI was decent and kept up with you most of the time, and it felt good having a partner in crime for the entire game. The smart use of barrels was a thing, too. And of course the music...

The first game is my favourite of the series, but I gotta admit I never spent as much time with the other two games as with the original; in fact I barely played them, so maybe I missed an even better, more polished experience...

As for KI, Fulgore, the wolfie and the skeleton fascinated me back then, but I didn't play the game.
 