Fuad of The Inq, trying to explain NV40 ... AGAIN

LOL man, what a load of BS. Now people are claiming a 15% performance lead?

OMG, where are the facts? No facts and a load of conclusions... WTF

Sorry, all this just looks so stupid to me.
 
Luminescent said:
If the Cebit rumors are correct and R420 is indeed 5-15% slower than NV40, assuming it has 12 pixel pipelines (in comparison to NV40's hypothetical 16)

Yes, but what if a 5-15% slower R420 costs 15-25% less than the NV40?
Does anyone have any idea of the possible performance/price ratio? (Prolly not, but it would be fun to speculate about that aspect too ;) )
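
Just for fun, here's a toy sketch of what those rumored deltas would mean for performance per dollar. Every number is made up for illustration, including the $500 baseline and the idea that the rumored percentages translate directly into benchmark scores:

Code:
# Toy performance/price comparison under the rumored deltas (all numbers hypothetical).
nv40_perf, nv40_price = 100.0, 500.0  # assumed baseline: a $500 card at 100% performance

for perf_delta, price_delta in [(0.05, 0.15), (0.15, 0.25)]:
    r420_perf = nv40_perf * (1 - perf_delta)
    r420_price = nv40_price * (1 - price_delta)
    print(f"R420 {perf_delta:.0%} slower, {price_delta:.0%} cheaper: "
          f"{r420_perf / r420_price:.3f} vs {nv40_perf / nv40_price:.3f} perf/$")

Under those assumptions the cheaper card comes out ahead on performance per dollar at both ends of the rumor (0.224 and 0.227 vs 0.200).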
 
Luminescent said:
Keep in mind, these threads are for entertainment purposes, if 3D tech is your hobby. These rumors should be taken with a grain of silicon.
I like that. Do you mind if I take it as a signature? 8)
 
madshi said:
Here's a better translation for ya non-German speakers:
To wrap up this year's CeBIT report, we'll pass along some interesting rumors we heard during those four days:

- The NV40 will be faster than the R420. The R420 can be expected to be between 5% and 15% slower than its NVidia competitor.

- There will be two models of the R420: a "Pro" card with a 500MHz chip and 475MHz memory clock, and a card called "Nitro" with a 600MHz chip and 550MHz memory clock.

Thanks for the translation :)

Assuming these two rumours are true, in what regard would the R420 be slower?

Would it be in a clock-for-clock comparison, or just high-end vs. high-end?
(Yes, the two aren't mutually exclusive.)

The reason I'm asking is that if the NV40 is 5-15% faster than the Nitro, ATI would seem to have quite a bit of work ahead of them, assuming the Pro goes into the regular high-end price point with the Nitro as the extreme-end part that enthusiasts have been asking for.

Anyone care to speculate (and thus educate me) on this topic?
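
For what it's worth, here's one naive way to frame the clock-for-clock question. It's purely illustrative: it assumes performance scales linearly with core clock (it won't, exactly) and picks the midpoint of the rumored 5-15% gap:

Code:
# Naive clock-for-clock framing of the rumor (all numbers are assumptions).
nitro_clock, pro_clock = 600, 500  # rumored R420 "Nitro" and "Pro" core clocks (MHz)
nv40_vs_nitro = 1.10               # assumed midpoint of the "5-15% faster" rumor

# If performance scaled linearly with clock, the Pro would sit here vs. the Nitro:
pro_vs_nitro = pro_clock / nitro_clock
print(f"Pro ~{pro_vs_nitro:.0%} of Nitro; NV40 ~{nv40_vs_nitro:.0%} of Nitro")
# -> Pro ~83% of Nitro; NV40 ~110% of Nitro

On those assumptions, an NV40 that beats the Nitro by 5-15% would beat the Pro by a much wider margin, which is why the Pro-vs-Nitro positioning matters so much.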
 
volt said:
Yet we don't know whether the benchmarks in question were forced (via ForceWare, hah) to run in brilinear -- but would NV really do that *if* the NV40 has *that* much RAW power?

But...if the nV40 doesn't have "that much" raw power, it would make just as much sense for nVidia as it did with the nV3x, wouldn't it? I can see them hedging their bets without much difficulty.

I'm really not trying to be a party-pooper here, but I just don't see anything in these rumors that would lead me to call them "confirmed" or "verified." Hopefully we haven't degraded to the point where the number of times the same rumor is regurgitated now stands as its "proof"...? Besides, the rumors are beginning to contradict each other, it seems to me...a sure sign of non-confirmation...;)
 
WaltC said:
volt said:
Yet we don't know whether the benchmarks in question were forced (via ForceWare, hah) to run in brilinear -- but would NV really do that *if* the NV40 has *that* much RAW power?

But...if the nV40 doesn't have "that much" raw power, it would make just as much sense for nVidia as it did with the nV3x, wouldn't it? I can see them hedging their bets without much difficulty.

I'm really not trying to be a party-pooper here, but I just don't see anything in these rumors that would lead me to call them "confirmed" or "verified." Hopefully we haven't degraded to the point where the number of times the same rumor is regurgitated now stands as its "proof"...? Besides, the rumors are beginning to contradict each other, it seems to me...a sure sign of non-confirmation...;)

Yes Walt, generally it is a sign of non-confirmation. But not in this case! :LOL: :rolleyes:
 
WaltC said:
volt said:
Yet we don't know whether the benchmarks in question were forced (via ForceWare, hah) to run in brilinear -- but would NV really do that *if* the NV40 has *that* much RAW power?

But...if the nV40 doesn't have "that much" raw power, it would make just as much sense for nVidia as it did with the nV3x, wouldn't it? I can see them hedging their bets without much difficulty.

I'm really not trying to be a party-pooper here, but I just don't see anything in these rumors that would lead me to call them "confirmed" or "verified." Hopefully we haven't degraded to the point where the number of times the same rumor is regurgitated now stands as its "proof"...? Besides, the rumors are beginning to contradict each other, it seems to me...a sure sign of non-confirmation...;)

correct :)

Now, can you imagine what our world would look like without the Inq's elusive stories? Certainly much clearer, though that doesn't mean more accurate. I just think Fuad's rumor-changing attitude is really the problem here. We've always had rumors flying, but daily contradicting speculation coming from a large tabloid is just unhealthy :?
 
- NV40 is 16x1.
- The VS abuse rumors are false.
- No one's sure what the texture filtering was in the 3DMark tests.
- R420 is eight "extreme" pipelines; nobody knows what "extreme" means.
- No PS3.0; VS3.0 is unsure, but almost certainly not.
- NV40 will have rotated-grid multisampling; no one's sure about a blur filter like Quincunx.
- R420 will have "new" AA, which probably means 8x support.
- R420 will be clocked higher.
- R420 will launch second but hit retail first.
- Both will have GDDR3.
- I'll go out on a limb and say that R420 will have higher memory speeds than NV40.

Oh, and NV40's status as AGP or PEG native is still unknown.

There. Enjoy.

(Oh yeah--I didn't mention PS performance. I could write another one of these things on that. Goddamn cat... ;) )
 
The Baron said:
- NV40 is 16x1.
- The VS abuse rumors are false.
- No one's sure what the texture filtering was in the 3DMark tests.
- R420 is eight "extreme" pipelines; nobody knows what "extreme" means.
- No PS3.0; VS3.0 is unsure, but almost certainly not.
- NV40 will have rotated-grid multisampling; no one's sure about a blur filter like Quincunx.
- R420 will have "new" AA, which probably means 8x support.
- R420 will be clocked higher.
- R420 will launch second but hit retail first.
- Both will have GDDR3.
- I'll go out on a limb and say that R420 will have higher memory speeds than NV40.

Oh, and NV40's status as AGP or PEG native is still unknown.

There. Enjoy.

(Oh yeah--I didn't mention PS performance. I could write another one of these things on that. Goddamn cat... ;) )

Hey...are you back from school? My floors are all clean and I'm giddy from bleach fumes...
 
madshi said:
- The NV40 will be faster than the R420. The R420 can be expected to be between 5% and 15% slower than its NVidia competitor.

Didn't DemoCoder post this here as his own speculation a few days ago? These rumors are getting ridiculous :)
 
Well, I have to compliment the INQ on making it plain that they really don't understand the topic of "pixel pipelines"... Here we go again, but IF this particular revision of the INQ's nV40 story pans out, there are still some interesting things in what the Inquirer has written:

Quote:
We understand that the NV40 will actually only have eight physical pipelines, but these will appear to act like 16 in certain games and, indeed, in 3DMark 2001 Nvidia was telling people how 3DMark01 is a very nice benchmark since they can render 16 textures per pass in it, using only eight pipelines. What Nvidia is using is the ability of the Vertex Shader model 3.0 (known as PS 3.0 or VS 3.0) where the Shader can actually render textures as a virtual pipeline but can only render them without filtering information.


16 textures per pass is congruent with an 8x2 pixel pipeline organization--that's 8 pixel pipelines, each with 2 attached TMUs, so although you could get 16 textures per clock, you are limited to a maximum of 8 pixels per clock. Strangely, the INQ comment here seems to say that they aren't quite sure whether the nV40 can render 2 textures per pixel per clock because it is 8x2, or only 1 texture per pixel per clock because it is 8x1, in addition to the "virtual texturing" without filtering they are talking about--which is something very different from a discussion of pixel pipelines and the number of TMUs attached to each pipeline. Things like this:

Quote:
But, in modern games you always apply Bilinear, Trilinear or Anisotropic filtering to textures that you render, otherwise they will have some visual malformations. This is why, in normal games, eight pipelines will be able to render eight textures in a single texturing pass and possibly even 16 if you render both textures in the same pass using an 8x2 architecture


...seem to indicate--particularly the last sentence--that they aren't sure whether it's 8x1 or 8x2.

Quote:
What we understand Nvidia is doing is processing eight textures by pipelines while the other eight textures get processed by the Vertex Shader 3.0


This indicates to me that the INQ very possibly doesn't have a clear picture of the fact that pixel pipelines are designed to render pixels, and that *each pixel pipeline* incorporates a concrete number of texture mapping units (TMUs), whose specific job is to attach textures, per clock, to each pixel the pipeline produces for render. What they describe immediately above is a gpu which has an 8x1 pixel pipeline organization and is also capable of PS3.0 support in hardware. (I'm not sure what they mean by "virtual texture" as regards a gpu's pixel pipeline organization, as the TMUs are either physically there in each pixel pipeline or they aren't.)

Of all the things in a gpu, the functions of the pixel pipelines and of the texture units attached to each pipeline are among the simplest to illustrate and understand. The picture constantly being presented with regard to the nV40, though, is nothing if not opaque...

Quote:
However, we expect NV40 to deliver 16x1-like performance in Doom 3 (or should we say 16x0, which is what people call Nvidia's approach) where they render all the information except Z or colour features. This will be used by Doom III and games based on this engine.



This seems to do nothing except underscore that the INQ really doesn't understand that you cannot have "16x1-like" performance--you either have 16x1 performance, or you don't. What this sounds like to me is merely a repeat of the same type of bogus claim nVidia made for nV3x: that nV40 is 8x1 but is capable of 16 monochrome z-pixels per clock (z-pixels are internal gpu ops and are not rendered to screen), just as nV35/8 is 4 pixels per clock but can do 8 monochrome z-pixels per clock internally which aren't rendered to screen. In the case of nV3x it was hoped this would mislead people into deducing 8 pixel pipelines, which it did in some cases; and in the case of nV40, the hope seems to be that talking about "16 monochrome z-pixels" per clock will mislead people into thinking nV40 has 16 pixel pipelines, which it seems to be doing...
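
To make the z-pixel claim concrete, here's a minimal sketch of the arithmetic. The pipe count and the doubling factor are assumptions taken from the rumors above, not confirmed specs:

Code:
# Sketch of the "16x0" claim: z-only throughput vs. colour throughput.
# Both figures below are assumptions drawn from the rumors, not confirmed specs.
PIPES = 8              # assumed: 8 physical pixel pipelines
Z_ONLY_FACTOR = 2      # assumed: each pipe doubles up when writing only Z, no colour

colour_pixels_per_clock = PIPES                # what actually reaches the screen
z_only_ops_per_clock = PIPES * Z_ONLY_FACTOR   # internal gpu ops, never rendered

print(colour_pixels_per_clock, z_only_ops_per_clock)  # -> 8 16

Quoting "16 z-ops per clock" in isolation makes an 8-pipe part look like a 16-pipe part, which is exactly the confusion described above.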

It seems to me that what they are talking about now is an 8x1 organization that they have been told is "8x2" when it comes to monochrome z-pixels. Why they keep thinking that you need 16 pixel pipelines to render 16 texels per pass is beyond me, as they do seem to understand, as quoted above, that this is exactly what you get with 8x2: 8 pixels per clock and 16 textures per clock, two textures attached to each pixel rendered per clock.

They should also be very suspicious of any "megatexel" benchmark numbers they receive which do not come with "megapixel" numbers at the same time, since you can derive an *approximation* of the number of pixel pipes in a gpu (the numbers don't have to match exactly to figure it out) by taking a megapixel number and dividing it by clock rate. You can't do that with "megatexel" numbers, since you can have X number of texel units attached to each pixel pipeline. Hence, with supporting software, a 4x4 gpu (4 pixel pipelines with 4 texel units attached to each pipeline) could produce exactly the same "megatexel" numbers in a benchmark as a gpu organized as 8x2 or 16x1 running the same software.

So I'd advise the INQ, next time, to insist that the source providing them with "megatexel" benchmark numbers also provide "megapixel" numbers at the same time, if what they want to investigate is the pixel pipeline organization of a gpu. Suffice it to say that 16 pixel pipelines are not required to render 16 TEXELS per clock (not to forget, of course, that texels without pixels cannot be rendered at all.)
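
Here's that megapixel/megatexel arithmetic in a nutshell--a minimal sketch, with an assumed 500MHz clock purely for illustration:

Code:
# Why megatexel numbers alone can't reveal pipeline organization.
# The clock and layouts below are illustrative assumptions, not leaked specs.
CLOCK_MHZ = 500

# layout name -> (pixel pipelines, TMUs per pipeline)
layouts = {"16x1": (16, 1), "8x2": (8, 2), "4x4": (4, 4)}

for name, (pipes, tmus) in layouts.items():
    mpixels = pipes * CLOCK_MHZ          # megapixels/sec: reveals the pipe count
    mtexels = pipes * tmus * CLOCK_MHZ   # megatexels/sec: identical for all three
    print(f"{name}: {mpixels} Mpix/s, {mtexels} Mtex/s, "
          f"pipes = {mpixels // CLOCK_MHZ}")

All three layouts report the same 8000 Mtex/s, but megapixels divided by clock recovers the pipe count, which is exactly why both figures are needed.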

Still, while it does seem to be boiling down to an 8x1 organization, I'm content to wait and see, as it is impossible to understand who it is who has garbled the facts thus far--maybe the INQ--maybe their sources--maybe both...

Great post. Easy to read and understand even for non-techies such as myself 8)


The number of texture samples the chip can take in a single pass really has nothing to do with the number of TMUs. All the "textures per pass" number refers to is how many samples the chip can take using loopback (i.e. multiple iterative cycles) before writing out the fragment (or rejecting it) to the framebuffer. Textures "per cycle", on the other hand, is what you're talking about.
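
A quick sketch of the distinction (both figures here are hypothetical):

Code:
# "Textures per cycle" (TMU count) vs. "textures per pass" (loopback limit).
# Both numbers are hypothetical, purely to illustrate the distinction.
TMUS_PER_PIPE = 2    # per-cycle sampling capacity of one pipeline
MAX_LOOPBACK = 8     # iterative cycles allowed before the fragment is written out

textures_per_cycle = TMUS_PER_PIPE                # the hardware TMU limit
textures_per_pass = TMUS_PER_PIPE * MAX_LOOPBACK  # what "textures per pass" measures

print(textures_per_cycle, textures_per_pass)  # -> 2 16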

correction noted
 
To describe the recent phenomenon on message boards, "silly" is a serious understatement. I cannot follow any kind of reasoning that can already predict performance figures when only half the picture of the paper specs (i.e. just one competitor) is somewhat clear and nothing else is known.

There's just too much agony involved for my taste; it's rather simple: new offerings get released, tested, and analyzed, and then users pick whichever piece of silicon fits their needs best.
 
Oops

Ailuros said:
To describe the recent phenomenon on message boards, "silly" is a serious understatement. I cannot follow any kind of reasoning that can already predict performance figures when only half the picture of the paper specs (i.e. just one competitor) is somewhat clear and nothing else is known.

There's just too much agony involved for my taste; it's rather simple: new offerings get released, tested, and analyzed, and then users pick whichever piece of silicon fits their needs best.

I don't want to believe that all these people are incapable of telling whether some pre-launch benchmark results are reliable ("It's CPU limited; oh no, it's GPU limited; no, this is ut2k4, not ut2k3; you changed the resolution; you OC'ed the CPU; different color depth; Tuesday vs. Wednesday," etc.).

All I can think of is that people are going through all this to figure out which stock to buy before April 13th to dump the week after...

...or else this is the typical b3d crowd and they love speculation.
 
Re: Oops

Kor said:
Ailuros said:
To describe the recent phenomenon on message boards, "silly" is a serious understatement. I cannot follow any kind of reasoning that can already predict performance figures when only half the picture of the paper specs (i.e. just one competitor) is somewhat clear and nothing else is known.

There's just too much agony involved for my taste; it's rather simple: new offerings get released, tested, and analyzed, and then users pick whichever piece of silicon fits their needs best.

I don't want to believe that all these people are incapable of telling whether some pre-launch benchmark results are reliable ("It's CPU limited; oh no, it's GPU limited; no, this is ut2k4, not ut2k3; you changed the resolution; you OC'ed the CPU; different color depth; Tuesday vs. Wednesday," etc.).

All I can think of is that people are going through all this to figure out which stock to buy before April 13th to dump the week after...

...or else this is the typical b3d crowd and they love speculation.

Rather (b) in the majority, IMHO; and you of course forgot to add preferences for a specific IHV. It usually sounds a lot like pre-election debates, and it'll only get worse....
 