For different types of games, or parts of games, you would usually build an engine, or sub-engine, tuned to the environment: for example indoor Quake-style levels, big outdoor environments, a flight simulator, a space game, etc. But in my opinion this type of engine is dictated more by the environment you want to render than by whether you want to render it in hardware or software.
I'm not sure I agree with that. With hardware you don't have much of a choice... basically you can use either deferred rendering or immediate (forward) rendering... but beyond that there aren't many options. So most of your engine is about determining visible sets and sending them to the hardware efficiently.
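To make the "determining visible sets" part concrete, here is a minimal sketch of the kind of test such an engine runs over its scene every frame: rejecting a bounding box against the view frustum's planes. All names and the struct layout are invented for this example, not taken from any particular engine.

```c
#include <stddef.h>

/* Hypothetical types for this sketch. A plane is stored so that
 * dot(n, p) + d >= 0 means the point p is on the inside. */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;
typedef struct { Vec3 min, max; } Aabb;

/* Test an axis-aligned box against the frustum planes.
 * Returns 1 if the box is possibly visible, 0 if fully outside. */
static int aabb_in_frustum(const Aabb *b, const Plane *planes, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        const Plane *p = &planes[i];
        /* Pick the box corner farthest along the plane normal;
         * if even that corner is behind the plane, the whole box is. */
        Vec3 v = {
            p->n.x >= 0 ? b->max.x : b->min.x,
            p->n.y >= 0 ? b->max.y : b->min.y,
            p->n.z >= 0 ? b->max.z : b->min.z,
        };
        if (p->n.x * v.x + p->n.y * v.y + p->n.z * v.z + p->d < 0)
            return 0;   /* rejected: not in the visible set */
    }
    return 1;           /* survived every plane test */
}
```

The surviving objects are then batched and handed to the GPU; in a hardware engine, doing this culling cheaply (usually via a spatial hierarchy rather than a flat loop) is where the real design work goes.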
With software however, there are many rendering methods to choose from, and some of them can be made much more efficient when you design your engine around them.
So I'm not sure if the order should be seen as: environment -> engine -> renderer
Or perhaps: environment -> renderer -> engine
I think it's more like: environment -> (renderer+engine)
It's not always entirely clear where the distinction between renderer and engine lies in a software approach.
For Quake 1, rasterizing is the dominant factor, because there really is not much lighting going on; it's basically just texture mapping.
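That "basically just texture mapping" boils down to an inner loop like the one below: walking a horizontal span of pixels and copying texels, with the texture coordinates stepped in fixed point. This is an affine (not perspective-correct) sketch with invented names, just to show why the rasterizer, not the lighting, is where the cycles go.

```c
#include <stdint.h>

/* Hypothetical power-of-two texture size for this sketch. */
#define TEX_W 64
#define TEX_H 64

/* Draw one horizontal span of 8-bit palettized pixels.
 * u, v and their per-pixel steps du, dv are 16.16 fixed point. */
static void draw_span(uint8_t *dst, int len,
                      const uint8_t *texture,
                      int32_t u, int32_t v,
                      int32_t du, int32_t dv)
{
    for (int i = 0; i < len; ++i) {
        int tu = (u >> 16) & (TEX_W - 1);   /* wrap inside the texture */
        int tv = (v >> 16) & (TEX_H - 1);
        dst[i] = texture[tv * TEX_W + tu];  /* no lighting math, just a fetch */
        u += du;
        v += dv;
    }
}
```

Every visible pixel on screen goes through a loop like this once per frame, which is why so much of the Quake-era engineering effort went into keeping it fast on a Pentium.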
Yes, but I think that is a result of the hardware they had to work with, not a result of the type of environment/game they chose.
They just knew that Quake had to run on a Pentium in software, so there wasn't a lot of room for high-poly models or fancy lighting. Low-poly single-texturing was pretty much the only thing the Pentium could do in realtime. Hence they designed their environment around it.
Early 3D accelerators were not capable of much more either, but I doubt the developers put much thought into hardware acceleration during Quake's development.
So I think Quake 1 is predominantly rasterizer-bound because it was designed that way, not because of the type of game/environment in general. A sign of the times.