nV40 info w/ benchmarks

Chalnoth said:
Keeping FP24 would be bad in a different way than sticking with a dated FSAA algorithm.

Yeah, different in the sense that the first doesn't matter much, if at all, for software in the foreseeable future, while the second has immediate, VAST, visible imperfections right now. ;)

Specifically, I think that FP24 in a future video card would place a limitation on the shaders themselves that shouldn't be there.

You haven't even been able to produce a significant example of where 24-bit fp won't have sufficient quality, so why should anyone worry about this?

Before fp24 becomes a real problem, you've already got a shader so slow that no current hardware can run it in realtime, and for QUITE a while yet the same will be true of any hardware a normal person is willing to spend money on (the sub-$200 bracket). Years, likely, given developer slowness and lagging adoption in the low end of the PC market.

If you can have some shader somewhere that shows some visible imperfection at fp24, in what way is that really a problem? I mean, compared to the glaring banding errors of old with transparent textures and 16-bit frame buffers?

Seems to me you're desperately trying to make a problem out of something that isn't a problem at all. By the time fp24 has *actually* become a problem, hardware will already have moved past that point, and we won't get there for a long while. Hell, we don't have a single title yet built with DX9 in mind and you're worried about shader precision??? You've definitely got your priorities screwed up, dude. :)
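Just to put rough numbers on the gap being argued over, here's a quick back-of-the-envelope sketch in Python. The 1-sign/7-exponent/16-mantissa layout for R3xx fp24 is the commonly quoted one, so treat the layouts as assumptions, and the numbers as illustrative rather than measured hardware output:

```python
# Rough precision comparison (illustrative only, not measured hardware output).
# Assumed layouts: fp16 = 1 sign / 5 exp / 10 mantissa, fp24 (R3xx-style) = 1/7/16,
# fp32 (IEEE) = 1/8/23.

formats = {
    "fp16 (s10e5)": 10,
    "fp24 (s16e7)": 16,
    "fp32 (s23e8)": 23,
}

for name, mantissa_bits in formats.items():
    rel_err = 2.0 ** -(mantissa_bits + 1)   # worst-case relative rounding error
    exact_ints = 2 ** (mantissa_bits + 1)   # integers exactly representable up to this value
    print(f"{name}: ~{rel_err:.1e} relative error, integers exact up to {exact_ints}")
```

Against an 8-bit-per-channel framebuffer (one step is roughly 4e-3), fp24's ~1e-5 relative error leaves plenty of headroom for colour math; it's long dependent shader chains or large texture-coordinate values where the missing mantissa bits would plausibly start to show.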
 
That assumes that ATI had all the information about the implications of supporting FP24 vs. FP32 beforehand. To put it bluntly, decisions aren't always (or even often) made because they are the best decisions to make.

I take it you've never done any large-scale program management before, Chalnoth? Because if you had, you'd never have made that comment. People don't go spending hundreds of millions without studying, in great depth, the implications of all their decisions. Even nvidia probably realised the implications of the NV30 design but simply misjudged both ATi and the market.

And as has been said, the difference between ATi developing its FSAA and nvidia seemingly doing bugger all with theirs is a completely different issue. One has an immediate impact; the other is a much longer-term concern.
 
Chalnoth said:
I don't think ATI thinks that. That's what I see is a problem, basically. I don't like half-assed solutions.

Then maybe you shouldn't have been pimping the nv30? :rolleyes:
 
Chalnoth said:
Regardless, I never said support of FP32 was trivial. It would take more transistors, and may require different routing due to the different proportions of different parts of the chip. I just meant that we don't have any information as to what the exact implications of supporting FP32 vs. FP24 are, in the R3xx or any chip.

On the contrary, you did. Otherwise this post of yours:

Chalnoth said:
Once again, there are enough differences between the NV3x and R3xx such that there is no reason to believe that the choice in precisions is the reason for any performance discrepancies. To put it simply, we just don't know how many more transistors it would have taken to implement FP32 on the R3xx, since we don't know how many transistors the arithmetic units of the R3xx actually take up.

is pointless.
 
digitalwanderer said:
Uttar said:
No, I think Fudo just (finally) got his hands on that NVIDIA presentation ;)


Uttar
Got any slides? :|

No, I've never seen the presentation. Just one (or should I say two?) very nice guy(s) summarized it for me :)


Uttar
 
So how high is the BS-factor this time when NV claims 4x faster in D3 and 7x faster in HL2 etc...?

Let's just say, after the total and utter disappointment that was the NV30, I'm somewhat sceptical of any miraculous performance numbers coming from their direction. ;)
 
Uttar said:
No, I think Fudo just (finally) got his hands on that NVIDIA presentation ;)

Sounds like it, yeah - exactly the same performance projections they've been throwing around recently.

What a scoop though - 8 memory chips and an "Ultra" version which will be faster than the non-Ultra. WHODATHUNKIT?!!!

MuFu.
 
Knowing the nv3x's weaknesses (performance-wise) were poor fp32 performance and slow AA above 4x... my guess is that if those numbers are accurate at all, they were taken with AA > 4x and fp32 shaders. Anything else would surprise me greatly... even though those numbers are already very suspicious.
 
lost said:
Knowing the nv3x's weaknesses (performance-wise) were poor fp32 performance and slow AA above 4x... my guess is that if those numbers are accurate at all, they were taken with AA > 4x and fp32 shaders. Anything else would surprise me greatly... even though those numbers are already very suspicious.

Oh, another dependable pre-release milestone hit? "Possibly technically true in just the right conditions but utterly misleading performance benchmark reliably sourced to IHV hits net." Yeah, that would be about T-30, wouldn't it? But damn it, where are our pics?

And ATI still hiding in the bushes...
 
NV40 will use GDDR 2 or 3 memories as its memory controller is capable of both and Nvidia aims to get to 600MHz milestone - or should we say 1200MHz effectively. We heard before that it might end up faster but, in our view, that's simply impossible, since the PCB design would end too expensive and complex to produce. Another reason might be that it's not that easy to clock your card at more than 1200MHz.
The author obviously needs to learn a thing or two about memory. It's 600MHz DDR, so who cares how difficult it is to clock a card at 1200MHz; nothing actually runs at 1200MHz. The memory simply transfers data on both the rising and falling edges of the 600MHz clock (quick arithmetic below).

edit: in case it wasn't clear, the quote was from the Inquirer article.
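To spell out that arithmetic, here's a minimal sketch; the 256-bit bus width is purely an assumption for illustration (e.g. eight 32-bit chips, as the "8 memory chips" rumour above would suggest):

```python
# DDR transfers data on both clock edges, so a 600 MHz memory clock gives an
# effective 1200 MT/s data rate; nothing physically runs at 1200 MHz.
clock_mhz = 600          # actual memory clock
transfers_per_clock = 2  # rising + falling edge (DDR)
bus_width_bits = 256     # assumption for illustration (8 chips x 32 bits)

effective_rate_mts = clock_mhz * transfers_per_clock                # 1200 MT/s
bandwidth_gb_s = effective_rate_mts * 1e6 * bus_width_bits / 8 / 1e9
print(f"{effective_rate_mts} MT/s effective, ~{bandwidth_gb_s:.1f} GB/s peak")  # ~38.4 GB/s
```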
 
Heathen said:
I take it you've never done any large-scale program management before, Chalnoth? Because if you had, you'd never have made that comment. People don't go spending hundreds of millions without studying, in great depth, the implications of all their decisions.
Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies don't often judge correctly is borne out almost daily.
 
Chalnoth said:
Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies don't often judge correctly is borne out almost daily.

As can be seen by the fiasco that was the nv30. ;)

I still find it amazing that you try to pick apart the r300 to pull some magical flaw out of thin air, yet ignore what a complete and utter disaster the nv30 was/is.
 
Chalnoth said:
Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies don't often judge correctly is borne out almost daily.
Or in the case of nVidia, for the past year. ;)
 
Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies don't often judge correctly is borne out almost daily.

Yeah, it's a shame nvidia misjudged the situation so badly; never thought I'd hear you admit it though, Chal.

Seriously though, you still show no real appreciation of the complexities of project management. The fact that the R300 and its derivatives achieved such popularity on their own strengths proves ATi made a significant number of correct decisions. Defining 'correctly' is very ambiguous; judging it in black and white is next to impossible unless you get a complete implosion of the company that made the decision, and even then there are still benefits for the industry (whatever industry that is), assuming they're willing to learn.

In our specific case it could actually be argued that Nvidia is in a better theoretical position than ATi because they have more to learn from the NV3* project than ATi has to learn from the R3** project.

The only important questions are:
1) Can they learn from what's occurred? (So far the consumers seem more bothered about performance and IQ, and not theoretical objections to the differences between FP24 & FP32)
2) Can they (or, more importantly, are they willing to) apply these lessons to next-generation parts?

If the answer to either question is no, then nvidia is in trouble; ATi's issues are less apparent but no less important for them to answer. Hopefully both companies are up to the challenge, as resting on your laurels and believing everything is rosy, while dissing the opposition, is the easiest thing in the world to do.
 
so is it 4x in D3 and 7x in hl2 faster than nv30 with cheats or without?

or are they both cheating?

IMHO, to make that kind of performance delta on the same process and the same memory bus width, they'll have to remove most of the cheats and brilinear for the nv3x from the next driver set.

Which is very decent of them, only it will leave nv3x owners out in the cold.
 
vb said:
so is it 4x in D3 and 7x in hl2 faster than nv30 with cheats or without?

or are they both cheating?

IMHO, to make that kind of performance delta on the same process and the same memory bus width, they'll have to remove most of the cheats and brilinear for the nv3x from the next driver set.

Which is very decent of them, only it will leave nv3x owners out in the cold.
Well, considering it's nVidia we're talking about here, I fully expect the numbers are not using the NV3x "optimized" paths in either title, plus a cheating driver for the NV40, and maybe even some ?xAA mode that requires the NV30 to fall back to (partial) supersampling.

cu

incurable
 