Software rendering in games (*56k*)

Kaotik

As everyone knows, we can't have the same quality with software as we can with 3D accelerated rendering, not if we want more than 1 frame per day or something :LOL:

Anyway, I was wondering: what are the most advanced software rendering engines used in games at the moment, what can they do, and which effects do they miss or skip entirely?

As an example of the best I remember seeing, I'll take the PixoMatic renderer (or something like that) from the Unreal Engine. I took some shots from UT2004 with it, first with default settings, then some with maxed settings from the in-game menu and a bit of ut2004.ini editing.

The game flies with my rig, which nowadays includes a Matrox Millennium PCI (my R9800 Pro broke down, I'm sending it for RMA next Monday). Even at stock clocks (P4 2.4C) it's playable, and at the clocks I normally use (3.4 GHz) I bet I could even raise the resolution to at least 800x600 while maintaining playable framerates with a couple of bots around.

Anyway, here are the shots, which IMO look pretty good considering it's software at playable framerates. They're taken at 640x480 and resized with the almighty MSPaint to fit the rules of another forum.
[Screenshots 1.jpg through 10.jpg attached]


The point of this thread is for you people to explain to me where software rendering is today, what we can and can't do, and which games feature the most advanced software renderers around 8)
 
Software rendering is dead. It is interesting for the purposes of research when a particular feature is not yet supported in hardware, but not in real games.

The best software renderer in a retail game that I know of is the one that first shipped with the original Unreal, and later with Unreal Tournament and the first generation of Unreal-engined games.
 
As you can see, the texture filtering is fake; it's just prefiltered maps.
And it seems that the renderer doesn't render at the screen resolution, but has a 2x2 magnification.
So it's not quite the same quality as a regular 3D accelerator would get at 640x480.
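
In other words, the back end presumably does something along these lines when copying the image to the screen (just a rough sketch of the idea, not Unreal's actual code):

```c
#include <stdint.h>

/* Sketch: render into a half-resolution buffer, then replicate every
   pixel into a 2x2 block of the full-resolution screen. */
void magnify_2x2(const uint32_t *src, int src_w, int src_h,
                 uint32_t *dst, int dst_pitch)      /* pitch in pixels */
{
    for (int y = 0; y < src_h; y++) {
        uint32_t *row0 = dst + (2 * y)     * dst_pitch;
        uint32_t *row1 = dst + (2 * y + 1) * dst_pitch;
        for (int x = 0; x < src_w; x++) {
            uint32_t c = src[y * src_w + x];        /* one rendered pixel... */
            row0[2 * x] = row0[2 * x + 1] = c;      /* ...covers four on screen */
            row1[2 * x] = row1[2 * x + 1] = c;
        }
    }
}
```

So the renderer only has to shade a quarter of the pixels it appears to produce.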

Per-pixel operations such as lighting (bumpmap), cubemap reflections and mipmapping are also very expensive in software, so they will usually be faked or avoided.

And of course things like shadows are not possible in software because of the fillrate demands.

But if you cut some corners, you can apparently still play a game with a software renderer. Although one could ask why anyone would have a 2.4 GHz CPU and no 3D accelerator whatsoever (well, in your case it's obvious, but it should only be temporary, right? :)).
Even the Intel Extreme onboard GPUs found on most P4 systems should be able to render this game with better quality.
 
Indeed there's a 2x "zoom", but it can be disabled in ut2004.ini

And yes, things are faked, of course, but out of curiosity it's nice to see how good quality we can get nowadays at playable framerates.
Obviously some people think there's still a need for software renderers, as the Pixo-something renderer IIRC wasn't originally included in Unreal Engine 2.0 builds, but got added later.

And yes, in my case it's temporary, for a week or so, unless I find some AGP 4x (or "universal AGP") video card in some closet... the only possibility being a Matrox G200, unless that's AGP 1x/2x limited, I don't remember anymore. And my V4 4500 is in my friend's mother's work computer right now, so I can't use it either :cry:
Hopefully the Finnish post will be fast (which it never is) shipping my 9800 Pro to Guillemot in Denmark, and hopefully they'll quickly send me a new one...
 
Kaotik said:
Anyway, here are the shots, which IMO look pretty good considering it's software at playable framerates. They're taken at 640x480 and resized with the almighty MSPaint to fit the rules of another forum.
Just so you know. . . you could always just post links to the images and then they could be whatever size you want.
 
Ostsol said:
Kaotik said:
Anyway, here are the shots, which IMO look pretty good considering it's software at playable framerates. They're taken at 640x480 and resized with the almighty MSPaint to fit the rules of another forum.
Just so you know. . . you could always just post links to the images and then they could be whatever size you want.

I know, but I wanted them to be directly in the thread on the other forum.
 
Indeed there's a 2x "zoom", but it can be disabled in ut2004.ini

Is it still playable then? :)

And yes, things are faked, of course, but out of curiosity it's nice to see how good quality we can get nowadays at playable framerates.

Well, since it effectively still renders at about 320x240, barely more than the 320x200 Quake 1 already did, it's not too surprising really. Quake required a P60 I think... and current CPUs are about 100x as fast? :)
So, over-simplified: if you still render at the same resolution, you can spend 100x the work on every pixel :)
But since software renderers have rarely been seen since Quake 2, it may look like a large step.
PixoMatic isn't the only one though...
I wrote this one myself (shameless plug, I know :)), not too long ago: http://www.pouet.net/prod.php?which=10808
And then there is Nick's thing: http://sourceforge.net/projects/sw-shader
Both are recent software renderers with near-hardware quality, like PixoMatic.
But I've stopped doing software rendering now... well, at least triangle rasterizing/realtime stuff. I have moved to hardware, and the only software rendering I use now is non-realtime stuff... precalcing things/offline rendering.

Obviously some people think there's still a need for software renderers, as the Pixo-something renderer IIRC wasn't originally included in Unreal Engine 2.0 builds, but got added later.

It also got enabled in MOHAA after a certain patch. But I tried it on my brother's P4 2.4 GHz, and it wasn't exactly a success :)
I have no idea why it is included, since even onboard GPUs are quite good these days.

Hopefully the Finnish post will be fast (which it never is) shipping my 9800 Pro to Guillemot in Denmark, and hopefully they'll quickly send me a new one...

Good luck on that :)
 
Computers are 100 times faster, but memory isn't, unfortunately.

I loved the Quake 1 engine; I still think it's one of the best pieces of consumer software written in the last, er... well, since it was written.

Personally, if I were writing a game right now, I'd render it in software. The ray tracing demo on the net is more immersive at 15 fps and 320 by 200 than any game I've fired up in the last bunch of years.

Accelerators only seem to be improving the one thing I don't care much about, which is texture filtering. It's impressive, but it's just eye candy and doesn't directly relate to gaming fun for me.

I've been secretly hoping for a software rendering renaissance to happen. Using zero overdraw, there must be enough memory bandwidth to do some kind of anisotropic or bilinear filtering at a medium resolution. Add game-specific edge AA and you'd have something that would rival games of only a couple of years ago, sans a few fillrate-gobbling features.

The big reason for hardware acceleration was to get away from 8-bit colour and point sampling. Modern systems should be able to handle that now at lower resolutions.
 
Jerry Cornelius said:
Computers are 100 times faster, but memory isn't, unfortunately.
It's really not that bad. It's hard to write a software renderer that uses all 6.4 GB/s of bandwidth on a Pentium 4. And caches and prefetching make sure that the average latency is also close to 100 times smaller.

Quake I actually didn't do much more than reading values from a 2D buffer into a scanline. No filtering. No mipmapping, except per polygon. No lightmapping. That's correct. The trick they used to get those (magnificent) light effects was to premultiply the texture with the lightmap and use that as a new texture. A caching system made sure the premultiplying is done only when a new part of the scene becomes visible, and also allows buffering the results for lights that switch on and off. However ingenious this is, it is quite inflexible. It was designed for Quake I and a few derived games, and that's where it will stay. Requiring a specialized renderer per game is a waste of time.
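
In code, that surface cache boils down to something like this (a rough sketch with made-up names, not id's actual code; Quake itself worked in 8-bit palettized colour through a colormap lookup table rather than the multiply shown here):

```c
#include <stdint.h>

typedef struct {
    uint8_t *pixels;        /* texture premultiplied with its lightmap */
    int      width, height;
    int      light_version; /* lighting state this cache entry was built for */
} cached_surface_t;

/* Rebuild the cached surface only when its lighting has changed;
   otherwise the premultiplied copy is simply reused.
   Assumes surf->pixels is already allocated to w * h bytes. */
void update_surface(cached_surface_t *surf,
                    const uint8_t *texture, const uint8_t *lightmap,
                    int w, int h, int light_version)
{
    if (surf->light_version == light_version)
        return;                                   /* still valid, reuse it */

    for (int i = 0; i < w * h; i++)
        surf->pixels[i] = (uint8_t)((texture[i] * lightmap[i]) >> 8);

    surf->width  = w;
    surf->height = h;
    surf->light_version = light_version;
}
```

The inner rasterizer then just does plain point-sampled lookups into surf->pixels, which is why the per-pixel loop stays so cheap.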

So as soon as we step away from this inflexibility and try to do things more generally, it requires tons more operations per pixel. Even something as 'simple' as bilinear filtering is very inefficient on a CPU. It requires four texels to be read instead of one, weights have to be calculated for these four texels, and all twelve (or sixteen) color components have to be multiplied by their respective weights and then added to form one sample. This ain't a simple 2D lookup any more. Even using MMX, I've never been able to do it in less than 24 clock cycles. Add dynamic lightmapping, accurate mipmapping and perspective correction, and it gets close to 100 clock cycles. That's what happens in Real Virtuality.
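
In plain scalar C, one such sample looks roughly like this (a hypothetical sketch, assuming a 32-bit texture with power-of-two dimensions and 16.16 fixed-point coordinates; an MMX version packs the per-channel work into one register, but the amount of work is essentially the same):

```c
#include <stdint.h>

/* One bilinearly filtered texture sample. Compare with point
   sampling, which is a single array read. */
uint32_t sample_bilinear(const uint32_t *texels, int tex_w, int tex_h,
                         int32_t u, int32_t v)    /* u, v in 16.16 fixed point */
{
    int x0 = (u >> 16) & (tex_w - 1);             /* wrap, power-of-two sizes */
    int y0 = (v >> 16) & (tex_h - 1);
    int x1 = (x0 + 1)  & (tex_w - 1);
    int y1 = (y0 + 1)  & (tex_h - 1);

    int fx = (u >> 8) & 0xFF;                     /* fractional weights, 0..255 */
    int fy = (v >> 8) & 0xFF;

    uint32_t c00 = texels[y0 * tex_w + x0];       /* four texel reads... */
    uint32_t c10 = texels[y0 * tex_w + x1];
    uint32_t c01 = texels[y1 * tex_w + x0];
    uint32_t c11 = texels[y1 * tex_w + x1];

    uint32_t result = 0;
    for (int shift = 0; shift < 24; shift += 8) { /* ...weighted and added per channel */
        int t00 = (c00 >> shift) & 0xFF;
        int t10 = (c10 >> shift) & 0xFF;
        int t01 = (c01 >> shift) & 0xFF;
        int t11 = (c11 >> shift) & 0xFF;

        int top = t00 + (((t10 - t00) * fx) >> 8);    /* lerp along u */
        int bot = t01 + (((t11 - t01) * fx) >> 8);
        int val = top + (((bot - top) * fy) >> 8);    /* lerp along v */

        result |= (uint32_t)val << shift;
    }
    return result;
}
```

And that's before mipmap selection, lightmapping and perspective correction are added on top.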

Although I'm working on techniques for higher parallelization and greatly reduced overdraw, software rendering is just a fun hobby. Anyone serious about playing games can spend the money on a really low-end graphics card and be better off than with software rendering. The only exception is when the game requires minimum features that the hardware doesn't support, or when there's no hardware available at all. But for really playing games, other than Quake I, hardware is the better choice.
 
Does anyone here know what features Outcast's renderer had? IIRC, it was also a software renderer, and needed a damn fast CPU back then (Athlon 600 for 640x480)... I think it had very good reflections, bump mapping and such stuff.
 
Anteru said:
Does anyone here know what features Outcast's renderer had? IIRC, it was also a software renderer, and needed a damn fast CPU back then (Athlon 600 for 640x480)... I think it had very good reflections, bump mapping and such stuff.

It's a voxel landscape plus polygons for the characters/houses...
It had shadows, reflections, refraction (?), and all the characters had recorded voices... Still no game has come close to this masterpiece.
 
Personally, if I were writing a game right now, I'd render it in software. The ray tracing demo on the net is more immersive at 15 fps and 320 by 200 than any game I've fired up in the last bunch of years.

What raytracing demo would that be?
The only raytracing demos I've seen in realtime would do very simple texturemapping, reflections, refractions and shadows on some spheres and cylinders (with horrible aliasing).
A modern game like Doom3 does all this as well, but on actual animated, skinned geometry (a big problem for raytracing), at higher resolutions and better quality than a realtime raytracer can manage.

Accelerators only seem to be improving the one thing I don't care much about, which is texture filtering.

Have you been living under a rock?
Take a look at some of the latest NVIDIA or ATi demos, or previews from UnrealEngine 3.0. Especially NVIDIA's Timbury is nice. That's a Pixar-like movie in realtime (and no, Pixar does not use raytracing as their primary rendering method. I can't stand it when people think raytracing is the best rendering method for quality, and even think that Pixar or PDI/DreamWorks use raytracing. It's not, and they don't).
 
Scali
lol, well maybe a bit. But lots of polygons, pixel effects etc. ... it hasn't really thrilled me much. The last couple of years have been about advancing texture AA, which is nice, but the difference when playing games is pretty ho-hum.
The leap has been an accumulation of years of tweaks to get us here, nothing really revolutionary, at least to me.

http://www.realstorm.com/

Nick
The neat thing about Quake 1 was that it was a zero overdraw engine, so gameplay was pretty smooth until you got too many z-buffered bad guys on the screen at once. I haven't played anything since that preserved the "bubble" as well as it did.

I'm downloading the rv zip, anxious to have a look at it.

edit: crap, I'm running a Barton, oh well
 
Scali said:
Take a look at some of the latest NVIDIA or ATi demos, or previews from UnrealEngine 3.0. Especially NVIDIA's Timbury is nice. That's a Pixar-like movie in realtime (and no, Pixar does not use raytracing as their primary rendering method. I can't stand it when people think raytracing is the best rendering method for quality, and even think that Pixar or PDI/DreamWorks use raytracing. It's not, and they don't).

Stating that raytracing is not the best method for high quality is simply a false statement. But you can state that rasterizing can be good enough, and Pixar is great proof of that. It doesn't change the fact that for anyone who cares about graphics, raytracing is the logical way, if you want to capture 3D graphics. If you just want to paint stuff around, rasterizing is the way to go.

That's what they actually do. The rest is abuse. Great abuse. Still abuse.

But I guess mister "OpenGL is shit" will just say no.
 
Stating that raytracing is not the best method for high quality is simply a false statement.

And why is that?
There is plenty of proof around for methods that give either better results than raytracing (raytracing is not physically correct, as we all know, so it is by no means the ideal method), or give equivalent results in a fraction of the time.

It doesn't change the fact that for anyone who cares about graphics, raytracing is the logical way

So according to you, Pixar doesn't care about graphics? Or is not thinking logically?
 
Jerry Cornelius said:
Nick
The neat thing about Quake 1 was that it was a zero overdraw engine, so gameplay was pretty smooth until you got too many z-buffered bad guys on the screen at once. I haven't played anything since that preserved the "bubble" as well as it did.

Ummm, Quake 1 wasn't a zero overdraw engine. It drew the potentially visible leaves from the BSP in back-to-front order while filling the Z-buffer. But there was no guarantee of zero overdraw (though the potentially visible set from the current leaf did cut down a lot of the polygons that would otherwise be overdrawn).

What was more impressive was the idea of drawing the polygons of far-away models just by plotting their vertices, subdividing a triangle whenever one of its edges was longer than a pixel and plotting the new vertices as well.
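
In C the idea looks roughly like this (a hypothetical sketch, not id's actual code, with the colour handling simplified):

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct { int x, y; uint8_t color; } vert_t;

static int longer_than_pixel(vert_t a, vert_t b)
{
    return abs(a.x - b.x) > 1 || abs(a.y - b.y) > 1;
}

static vert_t midpoint(vert_t a, vert_t b)
{
    vert_t m = { (a.x + b.x) / 2, (a.y + b.y) / 2, a.color };
    return m;
}

/* Split the triangle until every edge is at most a pixel long,
   then just plot the vertices into an 8-bit framebuffer. */
void draw_distant_triangle(uint8_t *fb, int pitch,
                           vert_t a, vert_t b, vert_t c)
{
    if (!longer_than_pixel(a, b) && !longer_than_pixel(b, c) &&
        !longer_than_pixel(c, a)) {
        fb[a.y * pitch + a.x] = a.color;
        fb[b.y * pitch + b.x] = b.color;
        fb[c.y * pitch + c.x] = c.color;
        return;
    }
    vert_t ab = midpoint(a, b);                   /* new vertices on each edge */
    vert_t bc = midpoint(b, c);
    vert_t ca = midpoint(c, a);
    draw_distant_triangle(fb, pitch, a,  ab, ca); /* recurse into the four pieces */
    draw_distant_triangle(fb, pitch, ab, b,  bc);
    draw_distant_triangle(fb, pitch, ca, bc, c);
    draw_distant_triangle(fb, pitch, ab, bc, ca);
}
```

For triangles only a few pixels across, this avoids all the per-span setup of a normal rasterizer.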
 
Cryect said:
Ummm, Quake 1 wasn't a zero overdraw engine. It drew the potentially visible leaves from the BSP in back-to-front order while filling the Z-buffer. But there was no guarantee of zero overdraw (though the potentially visible set from the current leaf did cut down a lot of the polygons that would otherwise be overdrawn).

What was more impressive was the idea of drawing the polygons of far-away models just by plotting their vertices, subdividing a triangle whenever one of its edges was longer than a pixel and plotting the new vertices as well.

You must be thinking of Quake II or III. The first engine used sorted spans to completely eliminate overdraw for the static geometry. I think one of the Unreal software renderers used something similar.
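
Roughly, each scanline keeps track of which x ranges have already been filled, and a new polygon's span only gets to draw whatever is still free, so every pixel of the static geometry is written exactly once. A sketch of that idea (hypothetical code, not Quake's actual edge-list implementation):

```c
#include <stdlib.h>

typedef struct span {
    int x0, x1;               /* inclusive x range already drawn */
    struct span *next;        /* next span on this scanline, sorted by x0 */
} span_t;

/* Draw the still-free parts of [x0, x1] on one scanline and mark them
   as taken. Spans are inserted front to back, so whoever gets there
   first (the nearest surface) owns those pixels. draw() is whatever
   routine actually fills the pixels for the current surface. */
void add_span(span_t **line, int x0, int x1, void (*draw)(int x0, int x1))
{
    span_t **p = line;
    while (x0 <= x1) {
        span_t *s = *p;
        if (!s || x1 < s->x0) {               /* rest of the range is free */
            span_t *n = malloc(sizeof *n);
            n->x0 = x0; n->x1 = x1; n->next = s;
            *p = n;
            draw(x0, x1);
            return;
        }
        if (s->x1 < x0) {                     /* existing span lies to our left */
            p = &s->next;
            continue;
        }
        if (x0 < s->x0) {                     /* free gap before this span */
            span_t *n = malloc(sizeof *n);
            n->x0 = x0; n->x1 = s->x0 - 1; n->next = s;
            *p = n;
            draw(n->x0, n->x1);
        }
        x0 = s->x1 + 1;                       /* skip the part already owned */
        p = &s->next;
    }
}
```

The per-scanline lists get reset every frame, and the memory for them would normally come from a per-frame pool rather than malloc.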

Scali
That was pretty cool. There's no argument that hardware runs faster than software; you're just stuck doing hardware things the hardware way, and general computing is getting pretty damn fast these days.
 