Some comments from Nvnews...

I'm not completely disappointed. I think its performance is outstanding, and it definitely appears that ATI has done an excellent job with both FSAA and anisotropic filtering.

The R300 is, without a doubt, an excellent buy (assuming it doesn't have excess heat or related issues...but I'm sure most enthusiasts will find a way around any such problems...).

I just think that ATI could have done a little bit more for the future of 3D rendering.

Anyway, I can't wait to see what happens with this card once people on this and other forums pick it apart...that's when it gets really interesting!
 
I just think that ATI could have done a little bit more for the future of 3D rendering.

You are very frustrating... I have no other words to describe you...

Let's see, the R300 supports:

1) Pixel Shader 2.0
2) VS 2.0
3) Hardware DVD Decoding with HDTV support
4) Scalable up to 256 chips
5) Twice as many transistors as a GeForce4 Ti 4600 on the same process
6) AGP 8X
7) Improved Z-rejection (Hyperz III)
8) 8 Pixel Pipelines with a 256 bit memory bus
9) Floating point color
10) No it doesn't cure cancer

I have no idea where your comment is coming from... pathetic.
 
Compare the R300 to the R200 and I think you will really see how big a leap they made...
 

I meant texture fetches, assuming I know what you're talking about. If the R300 is indeed capable of taking up to 16 samples from a single texture per pixel pipeline per clock for use in bilinear, trilinear, or anisotropic, then that is very impressive.


Well, it can do 16 texlds in a single pass. You can assign all 16 to the same texture if you wanted to. Doing this in a single clock on all 8 pipelines simultaneously is probably not even technically possible given the memory bandwidth, but given that it's coming off the same cached data, who knows?


Eight pixel pipelines capable of 16 texture fetches, all operating in 128-bit? It just seems to me that that would need more transistors than the Radeon offers.


It seems that's what ATI is telling us.


Additionally, I don't believe there is any problem with using "merely" 64-bit fp color, even for multipass, as long as the internal processing is all carefully done to minimize error as much as possible.


Well, if all your calculations are done in 128-bit FP color, then you run into serious precision issues in multi-pass shader implementations if you can only output 64-bit FP color. By maintaining the same output precision as the internal precision, your shaders can be arbitrarily long without fear of lossiness based on the number of passes.
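
(A rough illustration of that point - just a toy sketch in Python, nothing to do with the actual R300 pipeline: float32 stands in for 32-bit-per-channel "128-bit" math, float16 for 16-bit-per-channel "64-bit" storage of intermediate results. The gap between the two grows with the number of passes.)

Code:
# Toy sketch: rounding error accumulates when intermediate pass results
# are stored at lower precision than the internal math.
import numpy as np

def run_passes(n_passes, store_dtype):
    color = np.float32(0.123456)                       # one color channel
    for _ in range(n_passes):
        color = np.float32(color) * np.float32(1.01) + np.float32(0.001)
        color = store_dtype(color)                     # write intermediate result out
    return float(color)

full = run_passes(32, np.float32)   # keep full precision between passes
half = run_passes(32, np.float16)   # round to 16 bits per channel between passes
print(full, half, abs(full - half)) # difference grows with pass count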


I really don't believe 128-bit color will be used much at all, except in certain special cases, such as with high-resolution normal maps for use with bump mapping or displacement mapping.


Huh? For displacement mapping?


Oh, and one last thing. Using 128-bit color in the framebuffer would make it very hard to use much of any FSAA at high resolutions.


You mean from a memory footprint view? MSAA takes care of that. In any case, we're not talking about displayable 128-bit color. We're simply talking about 128-bit output for intermediate shader passes. Presumably there is some mechanism for converting this to 24- or 30-bit for display.
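
(To put rough numbers on the footprint question - my own back-of-the-envelope figures, with an arbitrarily chosen 1600x1200 and 4 samples per pixel, ignoring MSAA colour sharing and any frame buffer compression:)

Code:
# Back-of-the-envelope sample-buffer footprint at 1600x1200 with 4 samples per pixel,
# comparing 32-bit integer color with a hypothetical 128-bit FP color buffer.
width, height, samples = 1600, 1200, 4

for bytes_per_pixel, label in [(4, "32-bit color"), (16, "128-bit FP color")]:
    mb = width * height * samples * bytes_per_pixel / (1024 ** 2)
    print(f"{label}: {mb:.0f} MB")
# 32-bit color:     ~29 MB
# 128-bit FP color: ~117 MB - most of a 128 MB card's memory, before textures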


There may be efficiency issues involved here, and there's also the fact that complex shaders (lots of textures) won't be much worse on a 4x2 pipeline.


But they will be. All of the existing 4x2 architectures have much lower memory bandwidth as well as fewer texture fetches allowed per pass. There are going to be a lot more memory fetch stalls on an R200 or NV25. There's no question in my mind that the R300 should technically have the most efficient shaders of all existing solutions.
 
Nite_Hawk said:
Maybe because marketing people lie?
Hey! Watch it! :LOL:

Na, you're right, anyway. ;)

I too find it rather amusing how some people are looking for "limits" (and such artificial limits, too) just because an ad told them that a piece of hardware doesn't have any. Is it naïveté, or are we talking blind fanboydom here?

ta,
-Sascha.rb
 
...it is pointless to have more than 64-bit floating-point color, though it may be good to have the 128-bit support...

A third-hand rumour is that an upcoming product will support "only" 64-bit fp - although its DAC will be >10:10:10:2 RGBA.
 
Oh, and one last thing. Using 128-bit color in the framebuffer would make it very hard to use much of any FSAA at high resolutions.


You mean from a memory footprint view? MSAA takes care of that. In any case, we're not talking about displayable 128-bit color. We're simply talking about 128-bit output for intermediate shader passes. Presumably there is some mechanism for converting this to 24- or 30-bit for display.

Remember as well that there is frame buffer compression (well, according to some of the previews).
 
Chalnoth said:
I'll keep it short this time.

ATI advertised with the slogan, "World Without Limits."

Don't I have a right to be disappointed when there are indeed limits in their new hardware?

The slogan "Exerience the World With Limits" doesn't have quite the same effect though, does it? ;)
 
stevem said:
...it is pointless to have more than 64-bit floating-point color, though it may be good to have the 128-bit support...

A third-hand rumour is that an upcoming product will support "only" 64-bit fp - although its DAC will be >10:10:10:2 RGBA.

If DX9 compliance requires "floating point", it would be quite odd if they didn't specify how many bits should be spent on the mantissa and exponent, respectively. I asked this before but didn't get any reply - does anyone know the bit distribution of these floating point formats?

There is no reason per se why they would use IEEE.

Entropy
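
(For reference only - these are the standard IEEE 754 layouts, not a claim about what DX9 specifies: 32-bit single precision is 1 sign / 8 exponent / 23 mantissa bits, and a 16-bit "half" float is 1 / 5 / 10. A quick way to pull the single-precision fields apart:)

Code:
# Split an IEEE 754 single-precision float into sign / exponent / mantissa bits.
# Shown purely as a reference layout; DX9's required formats need not be IEEE.
import struct

def fields(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF         # 23 bits, implicit leading 1
    return sign, exponent, mantissa

print(fields(1.0))    # (0, 127, 0)
print(fields(-0.5))   # (1, 126, 0)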
 
CMKRNL said:

I meant texture fetches, assuming I know what you're talking about. If the R300 is indeed capable of taking up to 16 samples from a single texture per pixel pipeline per clock for use in bilinear, trilinear, or anisotropic, then that is very impressive.


Well, it can do 16 texlds in a single pass. You can assign all 16 to the same texture if you wanted to. Doing this in a single clock on all 8 pipelines simultaneously is probably not even technically possible given the memory bandwidth, but given that it's coming off the same cached data, who knows?
I got the impression that Chalnoth wasn't talking about texlds, but texel fetches per clock.
 
Remember as well that there is frame buffer compression (well, according to some of the previews).

Dave, that's true. However, I really don't think Chalnoth's original point is an issue, because I can't see the final output being in a 64- or 128-bit displayable format. At best the DAC will output 10:10:10:2. If it indeed had to fetch 64- or 128-bit samples for FSAA then yes, it would take a huge penalty in both footprint and performance. However, based on the numbers we're seeing with 4x FSAA on, I can't imagine this is the case. Additionally, if the chip did feature 64-bit displayable color, I'm sure this would have been a huge feature that ATI would be touting.


I got the impression that Chalnoth wasn't talking about texlds, but texel fetches per clock.


Xmas, isn't this a function of memory bandwidth and memory subsystem architecture? It's impossible for anyone outside of ATI's engineers to know the exact answer to this. Data is continuously being prefetched for the shaders and may be reduced by various early rejection systems. Additionally, the type and size of the caches, cache line sizes, latencies, and the level of concurrency between the load/store unit and execution units will all determine what the final performance will be (not to mention other factors such as the program itself and its dependencies). Unfortunately, I don't think we'll ever get the details on how this actually works. GPU designers seem to be pretty secretive about even simple details like how many caches there are, how large they are, etc.
 
stevem said:
...it is pointless to have more than 64-bit floating-point color, though it may be good to have the 128-bit support...

A third-hand rumour is that an upcoming product will support "only" 64-bit fp - although its DAC will be >10:10:10:2 RGBA.

How can a DAC palette contain an alpha component? Surely it's just a straight 30 bits. :-?

Geeforcer said:
I agree, this is a huge disappointment.

When I first saw "World Without Limits", I was throbbing with anticipation. "At last!" I thought: the borders between countries would open, racial and ethnic hatred would be eliminated, and poverty, hunger and illiteracy would disappear. An era of unprecedented enlightenment would engulf the world and mankind would finally achieve its full potential.

And all we got was a graphics chip!? Never in my life have I been so crushed with bitter disappointment. ATi, how could you mislead me so?

LOL. :D

MuFu.
 
CMKRNL said:
Xmas, isn't this a function of memory bandwidth and memory subsystem architecture? It's impossible for anyone outside of ATI's engineers to know the exact answer to this. Data is continuously being prefetched for the shaders and may be reduced by various early rejection systems. Additionally, the type and size of the caches, cache line sizes, latencies, and the level of concurrency between the load/store unit and execution units will all determine what the final performance will be. Unfortunately, I don't think we'll ever get the details on how this actually works. GPU designers seem to be pretty secretive about even simple details like how many caches there are, how large they are, etc.
I think we're talking about different things here.
With texel fetches per clock I mean the number of texels the texture units can read from cache and filter in one clock cycle, independent of any bandwidth/latency constraints.
E.g. it is known that Parhelia's texture units can fetch 4 texels per clock each (trilinear filtering requires two texture units), which results in 64 texels per clock total (4 pipelines * 4 TUs * 4 texels/clock).
GeForce texture units are able to do one trilinearly filtered sample per clock; that's 8 texel fetches per clock per TU.
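
(Those per-clock figures multiply out as below - just restating Xmas's arithmetic, with the GeForce4 total assuming the 4x2 configuration mentioned earlier in the thread; bandwidth and cache behaviour are ignored.)

Code:
# Peak texel fetches per clock = pipelines * texture units per pipe * texels per TU per clock.
# Purely the arithmetic from the post above; real throughput depends on bandwidth and caches.
def texels_per_clock(pipelines, tus_per_pipe, texels_per_tu):
    return pipelines * tus_per_pipe * texels_per_tu

print("Parhelia:", texels_per_clock(4, 4, 4))   # 64 texels/clock (4 texels per TU)
print("GeForce4:", texels_per_clock(4, 2, 8))   # 64 texels/clock (8 texels = one trilinear sample)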
 
Well, I don't know what this thread is doing over here. If you want to comment on it, go to NVnews. If you think he is a fanboy, OK, then don't quote him and don't act like a fanboy yourself.

Just because almost everybody thinks it's a GREAT card doesn't mean some people can't see drawbacks or feel that it could have been better :-?
 
Evil,

You are missing the point here. Chalnoth has made some comments that go beyond personal opinion. I simply put it here so that tech-savvy people could respond to some of these baseless *accusations* and *insinuations*. Not in a fanboy spirit, but for the sake of an honest representation of a new product.

As one of the leaders of one of the biggest Nvidia sites, you can't post completely baseless accusations of inferiority and of holding the industry back, and call into question key technologies of a new product, without hard data to back it up. Hundreds, nay thousands, of people form their opinions from, or are influenced by, Type and Chalnoth. They are considered technical experts who know what they are talking about. While virtually none of the people at Nvnews are going to read this, I felt it would be a good place to get to the bottom of the technical merits of his post. Perhaps it will influence him and others to be more careful when posting baseless misinformation.

This runs along the exact lines of the infamous Kyro document, and clearly shows a pattern from the leaders of the Nvidia camp. While no opinions are likely to be changed, at least the true aspects of the technology can be discussed in no uncertain terms.
 
I am inclined to agree with Hellbinder almost entirely. The negative spin that Chalnoth is attempting to put on the Radeon 9700 is silly. Outright silly claims of "ATI holding the 3D industry back", and nitpicking what is clearly the best 3D product available on the market (well, almost), is just shameless bashing of ATI... no two ways about it. Meanwhile there are no critical words for Nvidia at all, none - just how great the NV30 will be, that it will absolutely beat the Radeon 9700, that without a doubt it will be more advanced in every aspect, on and on, but with no real evidence at all to back the claim... none. And on top of that, how much better Nvidia is than ATI in general.

WTF is that? It isn't as simple as "fanboyism" either; these guys (yes, I think there is more than one) are educated enough to know that ATI is a good company with excellent products, yet they continue to poke and prod at ATI even when they know they are wrong!!! Come on, it is unbelievable...

BTW Chalnoth, I am not convinced yet that the NV30 will be better than the Radeon 9700, or that Nvidia is superior to ATI in every way. I guess you will just have to try harder.
 
I didn't miss the point, but just posting the quotation invites a (biased?) response like: "he is completely biased".

Now, I do think that he has the right to say that there are some limitations, and he can say that these limitations disappoint him.

I clearly agree with you and others that we don't know anything about the NV30 (in terms of functionality), and he can't say NV30 > R300 in terms of features.

What I don't like is that by quoting this, you should have known it would end with "What a fanboy!" - which is at least what the first responses in this thread amount to.

Finally, he has said that the R300 is a great product; he is just a little disappointed in some features, and he has the right to be, even if it's the best possible product right now.

For my part, ATI has made the best consumer card ever up to now. Perhaps it has some drawbacks (and I hope B3D will highlight those), as well as a really nice future (and I hope B3D will highlight that too).
 
Yes, bashing the R300 for not being innovative enough is stupid. Here we have the biggest leap in real-time 3D rendering technology in years, and a few nVidiots are whining about any irrelevant point they can find, just because the technological leap isn't coming from nVidia.

I could respect nV fansites if they simply stuck to "NV30 will be faster" - as it surely will be, seeing as how it'll have the advantage of many more months of development - but to moan about the "limitations" of the R300 is just silly.
 