Anode said: Doing fp16 at full speed and fp32 at half is as good a choice as ATI doing 24 bits, if not better. It may not seem so right now, but it will pay off as time goes on.
I think you forgot the "because [...]" at the end??
Anode said: IMO supporting full FP32 would be the way to go in the future. FP24 is mostly a stop-gap thing which ATI did to reclaim their performance crown, which worked for them at this point in time. But I don't doubt for a second that they will be going FP32 for future cards.

I'd agree to that, but...
The problem is that having FP16 and slow FP32 today won't help you: games in the foreseeable future won't benefit at all from the increased precision FP32 gives you over FP24, but OTOH FP16 *might* not always be enough. And by the time FP32 becomes useful (when shaders are much longer), it won't benefit the FX 5900 either, as it will always be too slow (and it won't support PS 3.0 etc.). So, for now, FP24 just seems to be the better choice, plain and simple. If you need more precision in the future, you add it in a future product; it doesn't make sense to add it now.

(That said, FP32 might be useful for those much-touted "Pixar-in-a-box" type applications; haven't heard much about that lately, however.)
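A rough way to see what actually separates these formats is to quantize an ordinary Python double down to each one's mantissa width — 10 explicit bits for FP16, 16 for ATI's FP24, 23 for FP32. The `quantize` helper below is my own sketch, not real GPU arithmetic (it ignores exponent range and denormals):

```python
import math

def quantize(x, mant_bits):
    """Round x to mant_bits explicit mantissa bits; exponent range is ignored."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** (mant_bits + 1)    # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

# Relative precision (one unit in the last place) of each format:
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    print(f"{name}: relative step about {2.0 ** -(bits + 1):.1e}")
```

FP16's relative step of roughly 5e-4 is what makes it marginal for long shaders, while the gap between FP24 (about 8e-6) and FP32 (about 6e-8) rarely shows up in the short shaders games run today — which is the point being made above.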
Ante P said: I think you forgot the "because [...]" at the end??

For one thing, 32-bit floating point is the standard for IEEE single precision, and most of the code from older apps that was meant to run on the CPU would have been written using it.
Anode said: For one thing, 32-bit floating point is the standard for IEEE single precision, and most of the code from older apps that was meant to run on the CPU would have been written using it.

I have yet to see or hear about any FP32 shader program that couldn't run perfectly fine with FP24 precision. I don't doubt that someone could come up with a contrived case, but I'm talking about something interesting and/or useful. How, exactly?

Anode said: Workstation apps come to mind, where fp32 helps you.

But wouldn't it make more sense to release a card that can run FP24-precision shaders at usable speeds today than to release one that runs FP32 shaders at unusable speeds today, in the hope that tomorrow's hardware will make them practical?

Anode said: So it makes sense to expose 32 bits in the first generation of cards so that devs have something to work with. By the time the game developers are done and games hit the market, there will be faster cards, and what they developed will come to fruition.
GraphixViolence said: I have yet to see or hear about any FP32 shader program that couldn't run perfectly fine with FP24 precision. I don't doubt that someone could come up with a contrived case, but I'm talking about something interesting and/or useful.

Who needs 32-bit color? 16-bit is fine. Beyond that, who needs more than 640K? Nobody will ever need more than that.
GraphixViolence said: I have yet to see or hear about any FP32 shader program that couldn't run perfectly fine with FP24 precision. I don't doubt that someone could come up with a contrived case, but I'm talking about something interesting and/or useful.

What about this demo from Humus? Zoom in on the left part of the fractal on a Radeon and you'll see what fp24 does. If you're not convinced how much better fp32 can look there, I'll provide some screenshots from the reference rasterizer.
RussSchultz said: Who needs 32-bit color? 16-bit is fine. Beyond that, who needs more than 640K? Nobody will ever need more than that.

Um, that's a bit of an oversimplification, don't you think? If I show you two images, one rendered at 32 bpp and one rendered at 16 bpp, the difference will be obvious. If I did the same with FP32 vs. FP24, it would not. Of course, you could come up with contrived cases where the reverse was true (i.e. 16 bpp looks the same as 32 bpp, or FP32 looks much better than FP24), but what's the point? We're talking about useful products here, after all.

(In other words: in computers, more is never enough.)
Ante P said: Excuse my ignorance, but how would that ever apply to a "gaming videocard"?

But wouldn't it make more sense to release a card that can run FP24-precision shaders at usable speeds today than to release one that runs FP32 shaders at unusable speeds today, in the hope that tomorrow's hardware will make them practical?
GraphixViolence said: Um, that's a bit of an oversimplification, don't you think? ... If I show you two images, one rendered at 32 bpp and one rendered at 16 bpp, the difference will be obvious. If I did the same with FP32 vs. FP24, it would not.

Yes, I think what you've said is a bit of an oversimplification.
MDolenc said: What about this demo from Humus? Zoom in on the left part of the fractal on a Radeon and you'll see what fp24 does. If you're not convinced how much better fp32 can look there, I'll provide some screenshots from the reference rasterizer.

Neat... but that's about the best example of a contrived case I could think of. No matter how much precision you have, you're eventually going to run out once you zoom far enough into a Mandelbrot set. FP32 would just let you get a little deeper.
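For what it's worth, the blockiness MDolenc describes is easy to mimic without a GPU: quantize the pixel coordinates to FP24's 16 mantissa bits (a rough sketch of mine, ignoring exponent range and denormals) and two neighbouring sample points at a deep zoom collapse to the same value, while FP32's 23 bits still separate them:

```python
import math

def quantize(x, mant_bits):
    """Round x to mant_bits explicit mantissa bits (ignores exponent range)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)
    scale = 2.0 ** (mant_bits + 1)    # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

# Two pixel-centre coordinates 1e-6 apart near c = -0.75 on the real axis,
# i.e. a zoom level where one screen pixel spans about a millionth of a unit.
a, b = -0.75, -0.75 + 1e-6

# At FP24's 16 mantissa bits the two pixels quantize to the same number,
# so every pixel in the neighbourhood computes the identical orbit: flat blocks.
print(quantize(a, 16) == quantize(b, 16))   # True

# FP32's 23 mantissa bits still tell them apart at this depth.
print(quantize(a, 23) == quantize(b, 23))   # False
```

This also illustrates GraphixViolence's counterpoint: the same collapse happens to FP32, just about seven zoom octaves deeper.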
RussSchultz said: Why do we need FP24 when 32 bits borders on the realm of our resolvable limit (and exceeds most LCDs)?

It's a matter of diminishing returns. It would be nice if we could all have 1000-CPU render farms on our desktops, but given the relative cost and the relative quality difference vs. a high-end gaming GPU, it doesn't make sense. So the question is, what's good enough given the cost/performance/quality trade-offs of existing technology? Or, more correctly, what is the ideal balance?
Because it isn't enough for some effects: cumulative errors begin to creep in, and also because we're not using simple additive math in our shaders anymore.
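The cumulative-error point can be illustrated with the same kind of mantissa quantization (a sketch of mine, not real shader arithmetic): once an accumulator grows large enough, a narrow format silently drops each small increment, so long chains of operations drift in a way a wider format would not — here simulating FP16's 10 mantissa bits against FP24's 16 and FP32's 23:

```python
import math

def quantize(x, mant_bits):
    """Round x to mant_bits explicit mantissa bits (FP16=10, FP24=16, FP32=23)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)
    scale = 2.0 ** (mant_bits + 1)    # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

def accumulate(start, step, n, mant_bits):
    """Add step n times, rounding the running sum to the format after each add."""
    s = start
    for _ in range(n):
        s = quantize(s + step, mant_bits)
    return s

# Near 2048, FP16's representable values are 2.0 apart, so each +1.0 is
# rounded straight back off: one hundred additions accomplish nothing.
print(accumulate(2048.0, 1.0, 100, 10))   # 2048.0 (all 100 adds lost)

# FP24 and FP32 have enough mantissa bits to absorb every increment here.
print(accumulate(2048.0, 1.0, 100, 16))   # 2148.0
print(accumulate(2048.0, 1.0, 100, 23))   # 2148.0
```

That silent absorption is exactly the kind of creep that shows up in multi-pass shaders, which is why 8-bit (or even FP16) intermediates weren't always enough.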