Tech-Report R300/NV30/DX9 preview

Damn it, that was the best and most balanced look through all the PR and hype that's flying around about what's coming up. I'm just annoyed I didn't get there first!
 
DaveBaumann said:
Damn it, that was the best and most balanced look through all the PR and hype that's flying around about what's coming up. I'm just annoyed I didn't get there first!
It would have been great ;) and it's a really good article.
 
Damage delivers a technically deep but highly readable article yet again. The man is the best at what he does.

Wavey, you can "redeem" yourself with your R300/NV30 reviews. No pressure. :D
 
Yup, good article. One thing though, he states that
Use of DDR-II type memory has the potential to double memory bandwidth from the current 20GB/s, but only time will tell how this one will play out.
which seems to indicate that he too has fallen for the DDR-II = quad bits per clock per pin myth. Unless of course he means in the long term, but then the comparison would be essentially useless. In the short term a ~50% increase sounds more reasonable.
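Rough numbers, assuming a 256-bit bus at about 620 MT/s effective (roughly where the quoted ~20GB/s comes from) - the DDR-II rates below are purely illustrative guesses:

```python
# Back-of-the-envelope peak bandwidth: bus width (bytes) * effective transfer rate.
# Assumes a 256-bit bus at ~620 MT/s for today's DDR, which is roughly where the
# article's ~20GB/s figure comes from; the DDR-II rates are illustrative only.

def bandwidth_gbs(bus_bits, mtransfers_per_sec):
    """Peak memory bandwidth in GB/s."""
    return (bus_bits / 8) * mtransfers_per_sec * 1e6 / 1e9

print(bandwidth_gbs(256, 620))   # ~19.8 GB/s, the current figure
print(bandwidth_gbs(256, 930))   # ~29.8 GB/s, a ~50% bump from faster clocks
print(bandwidth_gbs(256, 1240))  # ~39.7 GB/s, what a genuine doubling would need
```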

Regards / ushac
 
ATI claims the R300 can address DDR-II memory types, as well. NVIDIA will only say that NV30 can use "DDR-II-like memory."


Did NVIDIA really say "DDR-II-like memory"? Can anyone confirm this?
 
Hasn't ATI said something about going with more product cycles? Which would imply more refreshes. I wonder if that translates into a refresh of the R300 with the same chip - possibly clocked higher - but with DDR-II memory when those memory modules become available? Or would that make the lineup too dense, considering the R9700/R9500 isn't even out yet and the R350 is coming up (I think it was scheduled for Q1/2003?)? Maybe they could leave that to a 3rd-party board maker...

Regards / ushac
 
ATI has quite a few options/features to combine:
- 0.13 micron manufacturing technology
- DDR-II
- DX9.1
- AMD's HyperTransport
 
ushac said:
Hasn't ATI said something about going with more product cycles? Which would imply more refreshes. I wonder if that translates into a refresh of the R300 with the same chip - possibly clocked higher - but with DDR-II memory when those memory modules become available? Or would that make the lineup too dense, considering the R9700/R9500 isn't even out yet and the R350 is coming up (I think it was scheduled for Q1/2003?)? Maybe they could leave that to a 3rd-party board maker...

Regards / ushac

Speculation mode:

Pre-launch, a lot of people assumed a spring 0.13 micron refresh (equal numbers said ATI never refresh a product like nVidia do), but that now looks like the R350, and therefore the other stuff may be introduced into that as well.
 
Also, the author says that there is not much difference between the R300's 4-channel x 24-bit max color precision vs the NV3x's 4-channel x 32-bit color precision. I would agree on a general and wide-ranging basis, but for specific effects the difference may matter. A matter of subjectivity, I guess.
 
Reverend said:
Also, the author says that there is not much difference between the R300's 4-channel x 24-bit max color precision vs the NV3x's 4-channel x 32-bit color precision. I would agree on a general and wide-ranging basis, but for specific effects the difference may matter. A matter of subjectivity, I guess.

I have a feeling that with the launch of the NV30, Nvidia are going to show this difference in their Demos :D
 
I would agree on a general and wide-ranging basis, but for specific effects the difference may matter.

I'm not so sure; it sounds as though ATi have concluded that 'that's enough' (although that could be down to a hardware trade-off). Remember that both cases are far beyond what film renderers are using, so I really would be interested to see under what conditions it is going to make any visible difference.
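To put a rough number on it (assuming a 16-bit mantissa for the 24-bit format and the usual IEEE 23-bit mantissa for the 32-bit one - my guess at the layouts, not anything official):

```python
# Worst-case relative rounding error per operation for a given mantissa width,
# roughly 2^-(mantissa_bits + 1). Assumed layouts: a 24-bit float with a 16-bit
# mantissa vs IEEE single with a 23-bit mantissa - assumptions for illustration.

def rounding_error(mantissa_bits):
    return 2.0 ** -(mantissa_bits + 1)

fp24 = rounding_error(16)  # ~7.6e-6
fp32 = rounding_error(23)  # ~6.0e-8

# A dependent chain of N shader operations can accumulate roughly N times that,
# which is where a specific effect (many passes, wide dynamic range) might diverge.
for n in (1, 100, 1000):
    print(f"{n:4d} ops: 24-bit ~{n * fp24:.1e}, 32-bit ~{n * fp32:.1e}")
```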
 
I guess you _could_ code a special effect that looks different/better with 128-bit rendering than the same code rendered at 96 bits, but I doubt you will be able to code an effect which is not reproducible with different/proper 96-bit coding. Even the fact that movie CGI does not use such high-precision formats seems to show that there are obviously easy ways to do realistic effects using 64-bit precision.

Maybe this time we will see a comment from an ATI engineer ("I'm not aware of any effect that can be achieved with 128-bit rendering that cannot be done with our industry-leading full-speed eight-pipeline 96-bit architecture") ... well, or something like this :)
 
DaveBaumann said:
Remember that both cases are far beyond what film renderers are using, so I really would be interested to see under what conditions it is going to make any visible difference.

Really? I was under the impression that film renderers (odd term, but I assume you mean things like Pixar's prman) use IEEE 32-bit floats (i.e. a C float) as a minimum. I wouldn't be surprised if they did some things with doubles.

Or have I misunderstood you?
 
According to NV, Pixar & ILM use 16-bit floats per component (so a 64-bit float overall) - that 64-bit format even uses the same s10e5 representation (1 bit sign, 10 bit mantissa, 5 bit exponent) that NV says their own hardware uses.
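For anyone who wants to poke at it, a minimal sketch of decoding one of those s10e5 values - my own illustration of the layout described above, not anybody's actual code:

```python
def decode_s10e5(bits):
    """Decode a 16-bit s10e5 float: 1 sign bit, 5 exponent bits, 10 mantissa bits."""
    sign = -1.0 if (bits >> 15) & 0x1 else 1.0
    exponent = (bits >> 10) & 0x1F
    mantissa = bits & 0x3FF
    if exponent == 0:                     # zero or denormal
        return sign * (mantissa / 1024.0) * 2.0 ** -14
    if exponent == 0x1F:                  # infinity or NaN
        return sign * float("inf") if mantissa == 0 else float("nan")
    return sign * (1.0 + mantissa / 1024.0) * 2.0 ** (exponent - 15)

print(decode_s10e5(0x3C00))  # 1.0
print(decode_s10e5(0xC000))  # -2.0
```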
 
DaveBaumann said:
According to NV, Pixar & ILM use 16-bit floats per component (so a 64-bit float overall) - that 64-bit format even uses the same s10e5 representation (1 bit sign, 10 bit mantissa, 5 bit exponent) that NV says their own hardware uses.

But wouldn't they take advantage of 128 bits or 96 bits if it could be done at the same speed as they are doing 64 bits now? And wouldn't that make for more realistic CGI?
 
I could believe that for anything where size was an issue, e.g. texture maps or final render output, but I don't see why you would do internal calculation in a format that isn't native in C or on the processor.
 