Latest NV30 Info

Joe DeFuria said:
Actually, I don't recall. (Was that a rhetorical question that I'm supposed to know?) ;)

You were supposed to know – or at least look it up in our review! ;)

But no, it's only enabled when FSAA is enabled, i.e. it's an example of a bandwidth saving technique that isn't enabled full time, and designed that way.

Well, all compression schemes (save DXTC) are lossless. Not all memory-saving techniques are about data compression though, correct? (Occlusion culling....) The analogy was not meant to be taken literally.

That's why I think it's a particularly poor analogy in this case. All the bandwidth saving techniques so far involve no loss of visual quality. Occlusion culling techniques are about removing what's not seen, not altering what is – again, your analogy does not apply.

For example...remember those 3dfx "HSR Drivers?" Didn't they have some type of "slider" (or registry setting) to adjust the aggressiveness of the algorithm used? The more aggressive, the faster the performance, but the more artifacts?

If you are actually predisposed to believing that these were anything more than a desperate joke, then this is an example that would probably never make it into hardware: the drivers attempted to 'predict' whether a polygon was occluded or not, but there was no guarantee the prediction would be right. This is an example of 'lossy compression' in one sense, and highly unlikely to make it into anyone's hardware.
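To make the distinction concrete, here is a hypothetical sketch in C of the kind of speculative culling being described. The actual 3dfx driver heuristic was never published, so the corner-probing approach, the function name and every parameter below are invented purely for illustration; the point is only that a guess based on a few samples can wrongly discard visible geometry.

```c
#include <stdbool.h>

/* Hypothetical "lossy" occlusion guess: probe the four corners of a
 * triangle's screen-space bounding box against the depth buffer
 * (smaller depth = nearer). If every probe already has nearer
 * geometry in front of it, *guess* the whole triangle is hidden. */
bool guess_triangle_occluded(const float *depth_buffer, int width,
                             int min_x, int min_y, int max_x, int max_y,
                             float tri_nearest_depth)
{
    int xs[4] = { min_x, max_x, min_x, max_x };
    int ys[4] = { min_y, min_y, max_y, max_y };

    for (int i = 0; i < 4; i++)
        if (depth_buffer[ys[i] * width + xs[i]] >= tri_nearest_depth)
            return false;   /* this probe might be visible: draw it */

    /* All four probes are covered by nearer geometry, so skip the
     * triangle. Pixels between the probes were never tested, so a
     * visible triangle can be wrongly dropped; that is what makes
     * the scheme "lossy". */
    return true;
}
```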
 
But no, it's only enabled when FSAA is enabled.

Why?

Occlusion culling techniques are about removing what’s not seen, not altering what is – again, your analogy does not apply.

My point being there is code (hardware) that must determine what is seen and unseen. The more "aggressive" the culling, the more stuff you remove, and the more chance of

1) removing stuff that isn't actually supposed to be removed. Call it "lossy" or call it "not enough precision / information to correctly decide".

2) Removing stuff that is visually OK to be removed, but has some adverse effect on some game code regardless.

I submit that with almost any "automatic" occlusion culling, there can be adverse effects with software that was designed without the benefit of being tested on such hardware. It can be really horrible game code that "shouldn't be doing X or Y, because it's bad form", but that's irrelevant.

– the drivers attempted to ‘predict’ if a polygon was occluded or not, however there’s no guarantee it was going to be right

How is this any different from IMR hardware occlusion culling? That's what they do: "predict" if a polygon is occluded or not. (Unless the game is specifically coded in a particular way, with occlusion culling in mind.) What you are saying is that IHVs will never implement such a technique, simply because the results are not visually 100% correct in 100% of the cases?

If it boosts 3DMark 2003 and Doom3 benchmark scores....do you really think IHVs wouldn't consider it, even if it means some textures or polygons flash in some lesser-known and less popular game / benchmark?
 
Err, no. IMR hardware occlusion culling (i.e. early Z) *detects* whether pixels are occluded or not. AFAIK there are no speculative operations involved. The algorithms are fundamentally not lossy.
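For contrast, a minimal sketch of what an exact early-Z test does, assuming a simple per-pixel float depth buffer and a "less-than" depth function (the buffer layout and function name are illustrative, not any vendor's actual hardware). A fragment is rejected only when the stored depth proves it is occluded, so nothing visible is ever lost:

```c
#include <stdbool.h>

/* Exact early-Z visibility test: the fragment is discarded only if
 * the depth already stored at its pixel proves it is behind existing
 * geometry. A detection, not a prediction, hence lossless. */
bool early_z_passes(const float *depth_buffer, int width,
                    int x, int y, float fragment_depth)
{
    float stored = depth_buffer[y * width + x];
    /* "Less-than" depth function: smaller depth = nearer. */
    return fragment_depth < stored;
}
```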
 

Because you are far more likely to be able to compress with MSAA than without. With 4X MSAA it's likely that for the majority of pixels all 4 subsamples are exactly the same colour – it's only the subsamples that span edges that aren't. Most compression schemes work on blocks of pixels as well, and the blocks are sort of inherently built in with MSAA. Under normal rendering it's much less likely that neighbouring pixels will have the same colour, hence the compression would be much lower – presumably ATI felt the processing overhead would not be worthwhile in comparison to the bandwidth saved.
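A minimal sketch of why this works, assuming 4x MSAA with 32-bit colours (the struct and function below are illustrative only, not ATI's actual scheme). Every interior pixel collapses to a single colour plus a one-bit flag; only edge pixels need all four samples stored or transferred:

```c
#include <stdint.h>
#include <stdbool.h>

/* One pixel's four MSAA subsamples, 32 bits each. */
typedef struct {
    uint32_t samples[4];
} Pixel4x;

/* Returns true if the pixel compresses to a single colour (the
 * common case for interior pixels, giving roughly a 4:1 saving);
 * false for edge pixels, whose subsamples differ. */
bool compress_pixel(const Pixel4x *p, uint32_t *single_colour)
{
    for (int i = 1; i < 4; i++)
        if (p->samples[i] != p->samples[0])
            return false;           /* edge pixel: keep all 4 samples */

    *single_colour = p->samples[0]; /* interior pixel: one colour */
    return true;
}
```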

How is this any different from IMR hardware occlusion culling? That's what they do: "predict" if a polygon is occluded or not.

Wrong – when removing occluded pixels, IMRs test visibility; they do not 'predict'.
 
A bit off-topic, but Mike.C over on NVNEWS posted the following:
NV30 In January '03 - 11/07/02 4:27 pm - By: MikeC - Source:
While listening to NVIDIA's earnings conference call, CEO and President Jen-Hsun Huang basically stated that NV30 will be shipping in January, 2003.
Cost of developing entire NV3X product line will be close to $400 million.

Well, that was pretty much to be expected...

Whether that means "mass production" or not still remains to be seen...

I still think we might see the NV30 on the shelves in December...
 
alexsok said:
Whether that means "mass production" or not still remains to be seen...

Based on Jen-Hsun's brief statement on the subject, there was no indication that NV30 would not be in mass production by then.

Damn, I hate when I use negative connotations in a sentence :)

Edit: Looking back at my notes, he said something to the effect of "production shipping in January quarter." Now that I think about it, "January quarter" doesn't make sense. Will listen to the audio clip again.

Edit 2: The exact words were "expect to begin production shipments in our January quarter" and takes place a little past the 19 minute mark.
 
Now that I think about it, "January quarter" doesn't make sense. Will listen to the audio clip again.

It makes perfect sense....from a PR perspective. :rolleyes: State it in a way that at a cursory glance sounds like "January", but technically / legally could be as late as March...

It's almost as elusive as the last estimate "Christmas Season".

Do you think those statements were made that way by accident? ;)
 
Well, all that extra DDR II RAM being stockpiled by Samsung could now be sold to ATI at a cheap price :).
 
OK, now, here's a question.... are all those people - and you know who you are - who said that the NV30 would be available by Christmas gonna step up to the plate & eat a little crow? :rolleyes:
 
martrox said:
OK, now, here's a question.... are all those people - and you know who you are - who said that the NV30 would be available by Christmas gonna step up to the plate & eat a little crow? :rolleyes:

Should they really have to?

Yesterday, as reported on CNET, an nVidia representative said "...that full volumes may not arrive in time for Christmas but said Nvidia will have some of the chips on the market by then. "

And today the CEO says it will be early 2003 before they are commercially available.

It sounds like either the nVidia representative or the CEO should be the ones with the crow bibs on :D
 
Not at all... ".. chips on the market.. " can simply mean a dozen review protoboards to meet website/magazine hype circles, which still translates (as listed on CNET) into the average Joe not being able to actually go out and buy one until next year.

I have to raise an eyebrow over all this nonsense, since I have a sneaking suspicion that cost/development time isn't the full source of the delay. There is definitely something else at work here...

I'm betting that one or more marketing/release ploys are also underway to help seal initial release success. Be it an infusion of $$ and caffeine into MadOnion to produce a 3dmark2003 with a special "NV30-only" test that will artificially inflate scores by 500-1000 points, a special NV30-only UT2003 patch.. or some very unusual Detonator drivers that are able to muster another 20-30% of performance in 3dmark2001/Quake3.

These are my predictions since, from all speculation and best guesses on the NV30, it will need some form of extra gusto at release early next year in order to do more than simply "compete" with the (by then) long-standing 9700 Pro.
 
Not at all... ".. chips on the market.. " can simply mean a dozen review protoboards to meet website/magazine hype circles, which still translates (as listed on CNET) into the average Joe not being able to actually go out and buy one until next year.

Or filling priority OEM orders (e.g. Apple, Dell, etc.) before board vendors get their orders filled...

I'm betting that one or more marketing/release ploys are also underway to help seal initial release success. Be it an infusion of $$ and caffeine into MadOnion to produce a 3dmark2003 with a special "NV30-only" test that will artificially inflate scores by 500-1000 points, a special NV30-only UT2003 patch.. or some very unusual Detonator drivers that are able to muster another 20-30% of performance in 3dmark2001/Quake3.

Meh, I'd be happy with a *non-beta* Cg toolkit with a complete CineFX profile...
 
A little tidbit from Reactor Critical:
It seems that the NV30 is a luckless graphics processor for Nvidia and its partners. There have been numerous problems and delays with it, and there are still a lot of questions in regard to the pricing and availability of this part. In May this year Nvidia promised to announce the monster in August or September. The VPU was planned to be so powerful that it would be able to eat the current graphics processors for breakfast, the RADEON 9700 PRO for dinner and the up-and-coming VPUs from ATI Technologies and 3Dlabs for supper. Well, in June or July they promised to reveal the novelty in October, and a couple of weeks later the CEO indicated that the company wants its partners to start selling graphics cards based on the NV30 by the Christmas season.

Well, no: I was told by some of our sources that Nvidia will only be able to supply its AIB manufacturers with a handful of NV30 graphics processors on the 10th of December. There will not be a lot of VPUs at all, so I do not think that the Santa Clara-based developer will receive a huge profit from these chips. I believe they just want us to realise the performance and feature advantages the NV30 can provide both to end-users and software developers.

Furthermore, it seems that Nvidia again has to make certain redesigns of the VPU in order to achieve acceptable yields and to lower the manufacturing costs. Maybe now the NV30 will offer less performance; however, it will be a bit cheaper compared to the original chip.
 
alexsok said:
Why that?
There are many reasons for that... most of them can't be disclosed.
Well, are the reasons that can't be disclosed less and less attractive with time? Are you saying that the characteristics of the NV30 are going downhill with time? Otherwise I don't see why!
 
Well, are the reasons that can't be disclosed less and less attractive with time? Are you saying that the characteristics of the NV30 are going downhill with time? Otherwise I don't see why!
As I said, reasons that can't be disclosed; you've made your own conclusion! :)

I'm not the one to confirm or to deny that... ;)
 