Ah, see, you start getting it.
It's also up to me, if you get it
I could mention your name to some managers, if you tell me your true name.
Okay
Yes, but the internet, educational applications and CAD are not asking for a new raytracer. Well, I'm sure there's a market for it, but I can't possibly compete against professional tools that already have run-time SIMD compilation technology. If I did, it wouldn't be a hobby project any more. But I'll keep it in mind for the future...
Do they, though?
I have not heard of any offline renderer that has runtime compilation.
And if you ask me, the internet, educational apps and CAD call for hardware acceleration.
If the internet didn't require hardware acceleration, I'd probably be a rich man now with my Java engine
Sure, but just because these cheap graphics cards are available doesn't mean people immediately 'upgrade'. Or would you replace a GeForce4 Ti with a GeForce FX 5200?
If they want SM2.0 they probably will. Besides, if they can afford a GF4Ti, they can also afford a decent SM2.0 card.
Furthermore, I'll be SM4.0 compatible long before the hardware is affordable. It's not only amateurs who love that...
You said the same thing about SM3.0, but so far I don't think you've even implemented SM2.0 and lower completely.
What are you trying to say? This is my fourth generation renderer.
I recall you saying that it was your first renderer. Perhaps you've rewritten it a few times, but everyone does that.
Either way it sells. They didn't include it in Unreal Tournament for nothing. And I would be more than happy to sell it for a fraction of their price.
Dunno, I tried the patch for Medal of Honor, and it was completely unplayable on a P4 2.4 GHz. Funny, that game runs on an age-old GF2 in 1024x768 with little trouble. I'm sure it runs on pretty much any onboard GPU too. So yes, as far as I'm concerned, they did include it for nothing. I don't think you can find a system with a CPU that is fast enough, coupled with a GPU that is slow enough, to make Pixomatic a better alternative than the GPU.
And I don't see how your renderer will improve that, because you simply run into hard limits on memory bandwidth and processing power. After all, that is why we have hardware acceleration.
Refrast and your Java renderer, although respectable, both fail to use the CPU optimally. Neither is an option as a fallback. People who design their software to run on all x86 systems won't choose refrast.
Your renderer doesn't use the CPU optimally either, since you try to shoehorn it into a hardware API; that was the point. And until you can render a Q3 level at a respectable framerate, I don't think your renderer is an option for a fallback either.
You would have given up long ago, wouldn't you?
Well, yes. Unless someone pays me to optimize a software renderer, I see no reason to continue working on one. I am more interested in shading and animation now, and the things I am doing wouldn't stand a chance in software.
The magic algorithm might not exist, but I'm getting close, and the results will be satisfying, at least to me...
Make a demo again then, and this time add some skinned characters to a Q3 level. And do per-pixel lighting instead of that texture*lightmap thing.
Then let's see what framerates you get. After all, if you implement D3D9 shaders, you might as well use them.