Software Rendering Will Return!

Even if I accept that a SW renderer now delivers output identical to a GPU's (I can't really see anything from that link above), UT2k4 is still a four-year-old game, and 800*600 with mediocre performance is still too far behind to convince me of anything.
Let's leave it at that then.

The link works but apparently the servers are quite busy. Keep trying.

Thanks.
 
3dilettante, your arguments are all very good... when talking about optimizing a system meant for 3D rendering at a low cost.

What I'm talking about is systems for which the end-user doesn't even ask for 3D 'acceleration' at all. It's the same people who ten years ago would never have bought a Voodoo 2. These systems are hardly even meant for casual gaming. So let's call them office systems for a second if that makes things clearer.
Assuming we're talking about all Vista and Macs sold for the next 4-5 years, that's a pretty good baseline.
Blame the added cruft on big corporations looking for a way to justify continued upgrades of their software.

Now, in the not too distant future every system will be equipped with a Sandy Bridge quad-core CPU or better. That's twice the cores, twice the clock frequency and twice the SIMD width. Three-operand instructions and other AVX extensions are likely going to increase efficiency as well. So what is already true for today's dual-core CPUs is only going to get more true.
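
For a concrete picture of the "twice the SIMD width" and three-operand points, here is a minimal C++ sketch, assuming an AVX-capable compiler and made-up array arguments (n taken as a multiple of 8): the same multiply handles four floats per instruction with SSE and eight with AVX, and at the ISA level the AVX form is encoded non-destructively with three operands (vmulps ymm0, ymm1, ymm2), saving the register copies SSE's two-operand form sometimes needs.

#include <immintrin.h>  // SSE/AVX intrinsics; build with -mavx (or /arch:AVX)

// Element-wise multiply with 128-bit SSE: 4 floats per instruction.
void mul_sse(const float* x, const float* y, float* z, int n) {
    for (int i = 0; i < n; i += 4)
        _mm_storeu_ps(z + i, _mm_mul_ps(_mm_loadu_ps(x + i), _mm_loadu_ps(y + i)));
}

// The same loop with 256-bit AVX: 8 floats per instruction.
void mul_avx(const float* x, const float* y, float* z, int n) {
    for (int i = 0; i < n; i += 8)
        _mm256_storeu_ps(z + i, _mm256_mul_ps(_mm256_loadu_ps(x + i), _mm256_loadu_ps(y + i)));
}
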
Nehalem's mainstream variant with an added IGP, and AMD's Fusion, will beat initial shipments of Sandy Bridge by six months to a year. General mainstream availability of Sandy Bridge quad cores in office systems will lag even more.

It's possible the added graphics initiative will fail, but I suspect the market will like the steady transition from prior software practices (Vista and Leopard+ by that time).
The laptop market will like it as well, particularly the growing segment of cheap, low-power laptops that still cares about power efficiency.

So thanks for the discussion. It's clear to me now where software rendering belongs and what its future success depends on. I hope you can somehow relate to my perspective, and I wish you all the luck with whatever you do to improve 3D hardware rendering.
I don't have as clear a vision as you have, I guess.
I believe you are likely right that Sandy Bridge would be good enough to be passable in 2010 for running a lot of apps we have today, but I see roadmaps and initiatives from manufacturers and designers that lead me to think they won't be waiting that long.
 
Let's leave it at that then.

The link works but apparently the servers are quite busy. Keep trying.

Thanks.

I'm not implying anything, but with a cleared browser cache the first shot appears flawlessly yet the second won't. Have a look at whether there's something wrong with the link.

What I'm talking about is systems for which the end-user doesn't even ask for 3D 'acceleration' at all. It's the same people who ten years ago would never have bought a Voodoo 2. These systems are hardly even meant for casual gaming. So let's call them office systems for a second if that makes things clearer.

While I've been side-tracked by something totally different, I was following the (highly interesting, I must admit) conversation at the same time. I never had the impression that you meant anything other than "office systems", and that's exactly where I disagree with the description "casual gamer" in earlier posts.

Here 3dilettante has a point when he says that hybrid CPU/GPU cores will also likely appear, and that's exactly where I mentioned that such cores might eventually replace IGPs as we know them today.

There's still dedicated graphics hardware in such a system, and the only question mark I have at the moment is whether Intel will use its own in-house developed technology or temporarily use IMG IP for it (but that's a rather irrelevant detail here).

My points about the power-critical PDA/mobile/UMPC markets also went unanswered. From the very birth of 3D in that market, Intel chose to address it with a graphics unit incorporated in their SoCs, simply because addressing it with whatever higher-end CPU would have blown power consumption beyond any tolerable level without making a significant difference. In those SoCs the frequency ratio between the host CPU and the graphics unit is roughly 8:1. And no, those SoCs aren't sold exclusively for graphics purposes either:

http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/03-10-2008/0004770435&EDATE=
http://www.imgtec.com/partners/Intel-Corporation.asp
 
A little bit of a detour, but is Larrabee going to require high-speed memory and a PCB?

Certainly the high end.

How in the world will Intel compete against the full product line of NVIDIA? It is really hard to imagine them coming out with things that compete with 9600-class products at the price NVIDIA sells these sub-systems for. Not anytime soon, anyway. NVIDIA is a very low-cost provider and has spent years on reconfigurability and everything else.

The next mainstream generation (DX11?) from NVIDIA is likely to be C++ compatible and very powerful. Even the low-end chips are likely to be very good performers.

Larrabee is certainly not going to wipe these off the map overnight. Not a chance, unless Intel miraculously comes out with a top-to-bottom family of great performance at a great price. And that means massive fab expansion (a risk that could doom the company's financials) or a trip to the same fabs that NVIDIA uses.

So if you are a developer, are you going to program for what is in people's systems or for some nebulous promise? That is probably why Sweeney is already playing with CUDA.
 
The next mainstream generation (DX11?) from NVIDIA is likely to be C++ compatible and very powerful.
People keep saying things like this and I wonder what you really mean by "C++ compatible". There isn't some gold standard that says "we want to run C++ on the GPU"... indeed, the point is that C++ is generally ill-suited to GPU-like processor architectures, which is why we have several more parallel options. I could make the argument that you can already run "C++ on the GPU" with CUDA and/or RapidMind, but neither is the same as programming scalar code. But that's indeed the point: we want/need different languages and features to specify massively parallel computations.
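
To make that concrete, here is a toy C++ sketch (a made-up saxpy-style example, not code from CUDA or RapidMind): the scalar version is an ordinary loop, while the "GPU-shaped" version has to be reorganised into a kernel that handles exactly one element per invocation, which is the form CUDA, CTM and friends expect; the plain loop at the bottom merely stands in for the hardware's thread dispatch.

#include <cstddef>
#include <vector>

// Ordinary scalar C++: one thread of control walks the whole array.
void saxpy_scalar(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

// The same computation reshaped for a data-parallel model: the body is a
// "kernel" that processes one element, identified by an index that a GPU
// runtime would supply for thousands of elements at once.
void saxpy_kernel(std::size_t i, float a, const float* x, float* y) {
    y[i] = a * x[i] + y[i];
}

void saxpy_data_parallel(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)  // stand-in for the thread grid
        saxpy_kernel(i, a, x.data(), y.data());
}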

So if you are a developer, are you going to program for what is in people's systems or for some nebulous promise? That is probably why Sweeney is already playing with CUDA.
The languages used to program these things are largely incidental. The fact that someone is playing with CUDA doesn't necessarily mean that they won't also "play with CTM" or "play with [whatever Larrabee uses]". RapidMind is a clear demonstration that the concepts behind a lot of these processors are really the same, and that it's not particularly hard to abstract across them while maintaining high performance. Certainly NVIDIA, Intel and AMD would much rather lock you into their API/hardware, but that as well is entirely beside the point.
 
Certainly NVIDIA, Intel and AMD would much rather lock you into their API/hardware, but that as well is entirely beside the point.

Or maybe that is exactly the point. The problem being that if the 3 vendors don't agree on a common "software level" API, then this alone is a serious problem for bringing forth software rendering.

With any luck DX and GL can morph into this "software level" GPU API, and we devs won't have to deal with the problems of 2-3 completely incompatible APIs. Of course, with the way things are going, it looks as if this is going to take quite some time.
 
Or maybe that is exactly the point. The problem being that if the 3 vendors don't agree on a common "software level" API, then this alone is a serious problem for bringing forth software rendering.
Yes, fair enough (and a clever segue back on topic :D). My point was more that programming languages are kind of unrelated to the utility of the underlying hardware at the moment; namely, we need *new* programming languages to program these things, and since there's no good "standard" right now (other than arguably DX/GL, but those are becoming less suitable as APIs for directly controlling the hardware as the drivers increasingly do more of the heavy lifting), these interfaces can't be used as pros/cons for a specific hardware design. Basically, you're going to have to code directly to the specific hardware anyway for the time being, and at a high level they are all similar. Thus there's no need to bring the hardware-specific programming languages into the equation at all, except to note, as Timothy did, that they are all vendor-specific :)
 
What I'm talking about is systems for which the end-user doesn't even ask for 3D 'acceleration' at all.
They aren't asking for those extra cores either, which they need just as much as 3D hardware (which is to say close to not at all).
 
The biggest problem with IGPs is that they have to use system memory. Software rendering has the same problem. AGP texturing and TurboCache etc already show that even the fastest GPUs screech to a halt as soon as they have to use low-bandwidth system memory.
You also see that laptops with faster graphics chips get dedicated graphics RAM, which of course drives up prices.

So if you want a cheap solution, you can't use fast memory. In that case I don't think software rendering can ever win against a dedicated IGP. The cost of a modern IGP is negligible; just compare prices of the same motherboards with and without an IGP. On the other hand, the IGP relieves the CPU of rendering tasks, and does so very efficiently. Running SwiftShader on a laptop is not good for battery life, and will also generate far more heat, making the laptop noisy even when just doing basic desktop operations.

You also have to realize that no IGP means the CPU will have to take over ALL rendering. Especially with Vista Aero, that means you'll be doing texturing operations all the time (with filtering and alpha blending for many effects). My laptop with an X3100 has no problem doing that. I think SwiftShader would not only make the desktop run sluggishly, but also bog down the CPU as a whole. I think it would be similar to how Windows XP runs before you have video drivers installed, when it runs with the basic VESA driver and the CPU has to do all window drawing.
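
To give a sense of the per-pixel work that lands on the CPU in that case, here is a rough C++ sketch of a single alpha-blend pass over a 32-bit surface (a made-up helper, not SwiftShader's or Aero's actual code); multiply this by filtering, several translucent layers and a full desktop redrawn continuously and the cost adds up quickly.

#include <cstddef>
#include <cstdint>

// Blend an ARGB8888 source surface over a destination of the same size:
// dst = src*alpha + dst*(1 - alpha), per channel, per pixel, all on the CPU.
void blend_over(const std::uint32_t* src, std::uint32_t* dst, std::size_t pixels) {
    for (std::size_t i = 0; i < pixels; ++i) {
        std::uint32_t s = src[i], d = dst[i];
        std::uint32_t a = s >> 24;          // source alpha, 0..255
        std::uint32_t inv = 255 - a;
        std::uint32_t r = (((s >> 16) & 0xFF) * a + ((d >> 16) & 0xFF) * inv) / 255;
        std::uint32_t g = (((s >> 8) & 0xFF) * a + ((d >> 8) & 0xFF) * inv) / 255;
        std::uint32_t b = ((s & 0xFF) * a + (d & 0xFF) * inv) / 255;
        dst[i] = (a << 24) | (r << 16) | (g << 8) | b;  // keeps source alpha for simplicity
    }
}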

So will we drop IGPs in favour of software rendering? No, I don't think so. IGPs are more cost-effective and deliver a better user experience.
In fact, IGPs have just started to pick up speed with the AMD 780G. They are starting to make use of the fact that DDR2 800 and 1066 are now very affordable and can go in standard budget PCs. And DDR3 will increase the bandwidth even further. So AMD just made a leap in IGP performance, and I'm quite sure that nVidia will follow. Intel will probably get in the game as well, once Larrabee is a success and their first IGP based on Larrabee is introduced (I don't expect the 4000-series to make that leap yet; they're too similar to the 3000-series).
 