Some comments from Nvnews...

I just read this over at the Nvnews forum. And while I understand that it is the "Nvnews Forum"... I still found it pretty disheartening that the Nvidian propaganda machine is already at work. I suppose the "secret" PDF docs from Nvidia will be flying around in no time.

Chalnoth wrote:

1. How many texture samples can each pixel pipeline calculate?

2. Is each pixel pipeline capable of working in full 128-bit color with no performance hit (i.e. all calculations done in 32-bit floating point per channel), or are they all done in 16-bits per channel (or lower, though I think 16-bit fp would be the minimum realistically possible) with multiple pipelines working together?

One quick side note: I am still of the opinion that it is pointless to have more than 64-bit floating-point color, though it may be good to have the 128-bit support for non-color calculations (e.g. bump maps). It is interesting that the R300 should be able to do 64-bit color a little bit faster than today's video cards do 32-bit color, which is something we did not see in the 16-bit to 32-bit transition (and it is pretty exciting).

3. Apparently the architecture is 8x1. Is this really better than the 4x2 pipelines we have today? I suppose it may be better for DOOM3, but is it better for games into the future that will use more advanced shaders?

And one final thing.

The limits in the pixel and vertex shaders are most certainly limiting to developers. While it is true that games won't reach those limits anytime soon, they are a barrier that will make games that use DX9 as the min-spec more limited than they otherwise could be.

If nVidia's NV30 has support for completely virtualized programs (no size limitations... other than those imposed by video card memory/performance), then that would be grounds enough for me to consider the NV30 as the only true DX9 part. But it seems less likely that we will have unlimited program sizes just yet.

I have SEVERAL problems with statements made here. While everyone does indeed have the right to their own opinion, when you start talking about *limitations* in brand-new pixel and vertex shaders that are far beyond what anyone else has, with the implication that the NV30 or whatever will be totally different, without having any information on the subject... I get irritated.

Even going so far as to suggest that 8x1 pipes are ONLY designed for Doom III but will be limited in the future, without knowing any of the internal workings of the card.

I could make more comments than this, but I would appreciate it if some of you experts could actually take a stand and point out whether there are any true merits to these comments.
 
I'll keep it short this time.

ATI advertised with the slogan, "World Without Limits."

Don't I have a right to be disappointed when there are indeed limits in their new hardware?

Anyway, the rest of that is mostly questions to which I don't know the answers. You're blowing things way out of proportion.
 
Chalnoth,

Are you for real? ATI's marketing program actually had you convinced that their new hardware would, for all practical purposes, be without limits?

My opinion of you has sunk to a new low. :rolleyes:
 
From nVidia's GF4 press release:

the GeForce4 MX GPU is the most feature-rich, cost-effective, highly integrated GPU available for the mainstream market.

If nVidia's marketing people tell the truth, why is my friend's GF4 MX slower than my GF3 Ti200, and why doesn't it support DX8 in hardware?

Maybe because marketing people lie?

Nite_Hawk
 
Chalnoth said:
I'll keep it short this time.

ATI advertised with the slogan, "World Without Limits."

Don't I have a right to be disappointed when there are indeed limits in their new hardware?

You're joking, right?
 
Chalnoth said:
I'll keep it short this time.

ATI advertised with the slogan, "World Without Limits."

Don't I have a right to be disappointed when there are indeed limits in their new hardware?

They said World Without Limits, not Hardware Without Limits :)
 
Oh brother. By that standard, it is impossible to satisfy the marketing statement since everything is ultimately limited by the laws of physics.

NVidia may ship a part that allows arbitrarily long execution (e.g. thousands or millions of clocks per pixel), but that won't matter one iota for games, and while it may have some application in pathological RenderMan shaders, for the vast majority of cases it won't matter whether the limit is 2,000 or 10,000 clocks.

NVidia should focus more on execution speed of these shaders and less on allowing really long and slow/inefficient shaders to be written.

Most of all, game developers will be targeting the non-pathological cases, and we will want maximum performance from these shaders.
 
I agree, this is a huge disappointment.

When I first saw "World Without Limits", I was throbbing with anticipation. "At last!" I thought: the borders between countries would open, racial and ethnic hatred would be eliminated, poverty, hunger and illiteracy would disappear. An era of unprecedented enlightenment would engulf the world and mankind would finally achieve its full potential.

And all we got was a graphics chip!? Never in my life have I been so crushed with bitter disappointment. ATi, how could you mislead me so?
 
I do find it very odd: Chalnoth does seem to know his stuff, but I'm surprised at how biased he is.

Geeforcer, I couldn't agree more mate :LOL:
 
Jazz said:
I do find it very odd: Chalnoth does seem to know his stuff, but I'm surprised at how biased he is.

Geeforcer, I couldn't agree more mate :LOL:
Eh, he doesn't even know his stuff.
 
The world would be a better place if we'd all learn to be more objective, and that doesn't exclude 3D. There we'd really see a world without limits.

With technology taking major leaps like we saw today, I couldn't care less who makes it. It's an exciting time to be a gamer.
 
If the chip did everything and, on top of that, could fix your breakfast and make your bed, we'd still hear complaints about the eggs being slightly overcooked on one side and a small wrinkle left in the sheets with no mint on the pillow.

Anyone care to explain what (useful) purpose brand loyalty serves?
 
Anyone care to explain what (useful) purpose brand loyalty serves?

Same purpose as with any loyalty. Just look at politics, sports, music, art, or plain and simple anything that attracts fanatical followers or believers.
 
I have seen bias in many forms, but I have to say this takes the cake.


I wonder if there is an antidote for this virus called fanboyism (is that even a word?).
 

1. How many texture samples can each pixel pipeline calculate?


Do you mean output pixels or texture fetches? The number of output pixels is still unclear, although ATI's reference to multiple rendering targets means at least 2, possibly more. Texture fetches: 16.


2. Is each pixel pipeline capable of working in full 128-bit color with no performance hit (i.e. all calculations done in 32-bit floating point per channel), or are they all done in 16-bits per channel (or lower, though I think 16-bit fp would be the minimum realistically possible) with multiple pipelines working together?


I would certainly expect that to be the case. Presumably each operation on 128 bits of data takes 1 cycle. If it were doing this in two cycles à la P4 SSE2, then you would expect it to be considerably slower in Doom3. Given the talk of high precision and color range, I think it's pretty clear that they're referring to standard 32-bit floats per component. This also works very nicely for scalar SIMD operations.


One quick side note: I am still of the opinion that it is pointless to have more than 64-bit floating-point color, though it may be good to have the 128-bit support for non-color calculations (e.g. bump maps).


Since everything is done internally at 128 bits, it makes perfect sense to be able to output this format for multipass shaders. This means you can have arbitrarily long shaders with absolutely no loss in precision between passes. Think RenderMan.
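The precision argument is easy to demonstrate. Here's a minimal Python sketch (the 0.9 modulate and the pass count are made-up values for illustration, not anything from ATI's hardware) comparing a multipass computation that round-trips through an 8-bit framebuffer between passes against one that stays in floating point:

```python
# Hypothetical sketch: repeatedly modulate a color channel by 0.9 over
# several "passes", either quantizing the intermediate result to 8-bit
# fixed point after each pass or keeping it in floating point throughout.

def multipass(value, passes, quantize_8bit):
    for _ in range(passes):
        value *= 0.9
        if quantize_8bit:
            # round-trip through an 8-bit integer framebuffer between passes
            value = round(value * 255) / 255
    return value

exact = multipass(1.0, 20, quantize_8bit=False)  # float buffer, no per-pass loss
lossy = multipass(1.0, 20, quantize_8bit=True)   # 8-bit buffer between passes

print(f"float framebuffer: {exact:.6f}")
print(f"8-bit framebuffer: {lossy:.6f}")
```

The quantized result drifts a little further from the exact one on each pass; keeping the intermediate buffer in floating point removes that per-pass rounding entirely, which is exactly why a float output format matters for long multipass shaders.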


3. Apparently the architecture is 8x1. Is this really better than the 4x2 pipelines we have today? I suppose it may be better for DOOM3, but is it better for games into the future that will use more advanced shaders?


Isn't it obvious? At worst, an 8x1 architecture is no slower than a 4x2. At best, it's twice as fast. Given that not all triangles are multitextured with an even number of textures, you gain a huge advantage by having 8 pipes. It's also clearly better for shaders, since you have double the functional units at the same number of operations per cycle.
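That back-of-the-envelope can be made concrete. The model below is my own simplification (each pipe applies one texture per clock, looping back over the same pixel for additional textures), not documented R300 or GF4 behavior:

```python
from math import ceil

# Simplified fillrate model (an illustrative assumption, not documented
# hardware behavior): a pixel needing n textures occupies a pipe for
# ceil(n / tmus_per_pipe) clocks.

def pixels_per_clock(pipes, tmus_per_pipe, textures):
    clocks_per_pixel = ceil(textures / tmus_per_pipe)
    return pipes / clocks_per_pixel

for n in range(1, 5):
    p8x1 = pixels_per_clock(8, 1, n)
    p4x2 = pixels_per_clock(4, 2, n)
    print(f"{n} texture(s): 8x1 -> {p8x1:.2f} px/clk, 4x2 -> {p4x2:.2f} px/clk")
```

Under this model the two layouts tie on even texture counts, and 8x1 pulls ahead on single-textured and odd-textured passes, which matches the "at worst equal, at best twice as fast" claim.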
 
Tongue-in-cheek remark recognition seems to be at an all-time low, and some of you respondents are taking yourselves far too seriously.

Either that, or Chalnoth has indeed suffered some sort of emotional crisis and I'm way off base. :)
 
CMKRNL said:
Do you mean output pixels or texture fetches? The number of output pixels is still unclear, although ATI's reference to multiple rendering targets means at least 2, possibly more. Texture fetches: 16.

I meant texture fetches, assuming I know what you're talking about. If the R300 is indeed capable of taking up to 16 samples from a single texture per pixel pipeline per clock for use in bilinear, trilinear, or anisotropic filtering, then that is very impressive. Granted, some power might seem to be wasted, as you'd think it would be able to do, at the very least, two trilinear texture samples per clock with a relatively small increase in transistor count... though I'm not sure that would help anisotropic performance any.

I would certainly expect that to be the case. Presumably each operation on 128 bits of data takes 1 cycle. If it were doing this in two cycles à la P4 SSE2, then you would expect it to be considerably slower in Doom3. Given the talk of high precision and color range, I think it's pretty clear that they're referring to standard 32-bit floats per component. This also works very nicely for scalar SIMD operations.

Eight pixel pipelines capable of 16 texture fetches, all operating in 128-bit? It just seems to me that that would need more transistors than the Radeon offers.

Additionally, I don't believe there is any problem with using "merely" 64-bit fp color, even for multipass, as long as the internal processing is all carefully done to minimize as much error as possible (i.e. use internal 72-bit or so processing where needed). After all, using floating point virtually eliminates any multiplication problems, and since it is unlikely that the R300 has any more than a 10-bit DAC, I don't see any problem with only using 64-bit color (64-bit should be good enough for 12-bit DACs).

I really don't believe 128-bit color will be used much at all, except in certain special cases, such as with high-resolution normal maps for use with bump mapping or displacement mapping.

But, I do have to admit that the R300 has surprised me on a number of fronts. I will admit that it's possible that each pipeline is capable of performing 128-bit calculations without any fillrate hit.

Oh, and one last thing. Using 128-bit color in the framebuffer would make it very hard to use much of any FSAA at high resolutions.
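The framebuffer arithmetic behind that worry is straightforward. A quick sketch (the resolution, the 4x sample count, and the supersampled-buffer assumption are mine, chosen only to illustrate the scale):

```python
# Back-of-the-envelope framebuffer sizes: a 128-bit color pixel is
# 16 bytes, a 32-bit color pixel is 4 bytes. Assumes FSAA stores one
# full color value per sample (a supersampling-style buffer).

def buffer_mb(width, height, bytes_per_pixel, samples):
    return width * height * bytes_per_pixel * samples / (1024 * 1024)

color_128 = buffer_mb(1600, 1200, 16, 4)  # 128-bit color, 4x FSAA
color_32  = buffer_mb(1600, 1200, 4, 4)   # 32-bit color, 4x FSAA

print(f"128-bit color, 4x FSAA: {color_128:.0f} MB")
print(f" 32-bit color, 4x FSAA: {color_32:.0f} MB")
```

Under these assumptions, the 4x-sampled 128-bit color buffer alone would eat well over 100 MB before counting the Z-buffer, textures, or vertex data, so storing FSAA samples at full float precision at high resolutions is clearly impractical on cards of this era.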

Isn't it obvious? At worst, an 8x1 architecture is no slower than a 4x2. At best, it's twice as fast. Given that not all triangles are multitextured with an even number of textures, you gain a huge advantage by having 8 pipes. It's also clearly better for shaders, since you have double the functional units at the same number of operations per cycle.

There may be efficiency issues involved here, and there's also the fact that complex shaders (lots of textures) won't be much worse on a 4x2 pipeline.

But, I'm just sort of thinking out loud here. The R300's performance is incredible, and there's no reason to believe that any 4x2 pipeline card will oust it anytime soon, if ever.
 
Chalnoth said:
I'll keep it short this time.

ATI advertised with the slogan, "World Without Limits."

Don't I have a right to be disappointed when there are indeed limits in their new hardware?

Anyway, the rest of that is mostly questions to which I don't know the answers. You're blowing things way out of proportion.

I have just lost all respect for you, and I am sure a lot of people on these boards feel the same.

Disappointed with the R300? The R300 is the biggest leap in performance we have seen since the jump from Voodoo1 to Voodoo2, and it's packed with features as well. Disappointed, huh? :rolleyes:
 