New "Are You Ready" video

Chalnoth said:
mboeller said:
A description of PVR-TC would be nice!

Is it based on this? : http://www.acm.org/sigs/sigmm/MM2000/ep/levkovich/

No, they use a form of TC that's called vector quantization.

Read Simon's posting again!

I don't see VQTC (2bpp) [...] being adopted over S3TC (4bpp or 8bpp) despite the higher compression rate because the hardware engineers don't really like it. [...] I suppose the PVR-TC might be a replacement candidate but for the moment ...

PVR-TC isn't Vector Quantization TC!
 
That was a very interesting video. Sounds like they've surpassed the requirements for designing advanced missiles and nuclear weapons with in-house simulations. Do they actually need all that computing power? Sounds like NVIDIA doesn't take too many chances with the designs; they test everything first in simulation. Now if only the manufacturing end were under their control :).
 
I wonder if ATI throws all that hardware at it as well? Or does ATI have an even better way of getting their chips designed?
 
noko said:
I wonder if ATI throws all that hardware at it as well? Or does ATI have an even better way of getting their chips designed?

They design the chip in pretty much the same way.
I doubt their engine room is anything like the engine room at NVIDIA... although NVIDIA's engine room is nothing like Intel's engine room, so I might be wrong there... :)
 
mboeller said:
Simon F said:
I suppose the PVR-TC might be a replacement candidate but for the moment it's only in PowerVR's MBX and, furthermore, the last time I spoke to MS's DX team they weren't keen on newer texture comp modes.

A description of PVR-TC would be nice!

Is it based on this? : http://www.acm.org/sigs/sigmm/MM2000/ep/levkovich/
No. That's completely different.

Actually, at a glance, that seems like a more sophisticated version of "CCC: Color Cell Compression", which appeared at SIGGRAPH some years back. That was a 2bpp method with just one global palette; each block selected a subset of the palette entries (just two in this case), and each pixel in the block then indexed that local palette. I get the feeling that S3TC was inspired, to some extent, by the CCC method. (A decode sketch follows below.)

[Edit] Hmm, I missed the "interpolation" bit of that technique. It seems to borrow some of the S3TC ideas.[/EDIT]
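For concreteness, here's a minimal sketch of decoding one block under a CCC-style scheme as described above; the 4x4 block size, 256-entry global palette, and bit layout are my own illustrative assumptions, not the paper's exact format:

[code]
# A minimal sketch of decoding one CCC-style block. The 4x4 block size,
# 256-entry global palette, and bit layout here are illustrative
# assumptions, not the actual format from the paper.

def decode_ccc_block(global_palette, index_a, index_b, selector_bits):
    """Decode a 4x4 block: two 8-bit global-palette indices plus 16
    one-bit selectors = 32 bits per block, i.e. 2 bits per pixel."""
    local_palette = (global_palette[index_a], global_palette[index_b])
    pixels = []
    for i in range(16):                    # one selector bit per pixel
        bit = (selector_bits >> i) & 1
        pixels.append(local_palette[bit])  # pick one of the two colours
    return pixels                          # 16 RGB tuples, row-major

# Example: a greyscale palette and a checkerboard selector pattern.
palette = [(g, g, g) for g in range(256)]
block = decode_ccc_block(palette, 64, 255, 0b0101101001011010)
[/code]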

As for PVR-TC, I'm not at liberty to discuss it just yet but I suspect that it will become public soon enough.
 
Chalnoth said:
mboeller said:
A description of PVR-TC would be nice!

Is it based on this? : http://www.acm.org/sigs/sigmm/MM2000/ep/levkovich/
No, they use a form of TC that's called vector quantization.
As Mephisto pointed out, VQTC != PVR-TC.

Basically, the main drawback of the [VQ] technology is the very long compression times
Define "very long"? On my clapped-out PII 300, VQ compression of a 1024x1024 MipMapped texture took about 30s using a rather extensive Generalised Lloyd's Algorithm (GLA) search (i.e. not just Tree-based searching which is much quicker but not as good). Obviously that's not fast enough for dynamic upload speeds, but it's fine for interactive development.
That's not [...] (essentially requiring pre-compression by game developers... something they've been rather hesitant to do).
?!
Various developers (not universally of course) have been using assorted texture compression methods for quite some time. For example, palettised images are a simple form of VQ compression.
I'm still kind of foggy on how exactly the technique works... at first glance, you'd think it would also be rather poor for realtime graphics (i.e. low decompression time for the entire image, but it really looks like there would be more decompression time if you just want a few texels at a time, which is the norm in 3D graphics), but it seems to work.
Of course it works. VQ textures on DC on average were faster than 16bpp textures because of the increased cache hit rates and decreased bandwidth usage. Of course, if you only used a few texels out of every texture in every scene it'd be slower, but that doesn't happen in practice.
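To make the random-access point concrete, here's a rough sketch of a single-texel fetch from a VQ texture, assuming 2x2-texel codewords and a small codebook (names and layout are illustrative, not the actual DC format). It also shows why a palettised image is just the degenerate 1x1-codeword case:

[code]
def vq_fetch_texel(index_map, codebook, x, y):
    """Fetch one texel from a VQ texture with 2x2-texel codewords.
    A single texel costs only two reads (index, then codeword), so
    there's no need to decompress the whole image, and the small
    codebook tends to stay resident in the texture cache. A palettised
    texture is the same scheme with 1x1 codewords."""
    block_index = index_map[y // 2][x // 2]   # which codeword covers (x, y)
    codeword = codebook[block_index]          # 2x2 block of texels
    return codeword[(y % 2) * 2 + (x % 2)]    # select texel within block

# Example: a 4x4 texture as a 2x2 index map into a two-entry codebook.
codebook = [
    [(255, 0, 0)] * 4,   # codeword 0: solid red 2x2 block
    [(0, 0, 255)] * 4,   # codeword 1: solid blue 2x2 block
]
index_map = [[0, 1],
             [1, 0]]
print(vq_fetch_texel(index_map, codebook, 3, 0))   # -> (0, 0, 255)
[/code]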
 
From Mike.C over at nvnews, some bits from the video:

*bleep* contains more patents going into it than anything else in NVIDIA history

*bleep* will be the most well engineered and most powerful product out there

Take special note of the first tidbit, and remember all the patents nAo has been keeping us posted on lately...
 
Take special note of the first tidbit, and remember all the patents nAo has been keeping us posted on lately...

Strikes me as a slightly daft statement - I'll wager that GF4 contained more technology patents than GF3, and GF3 more than GF2, etc., etc.
 
Strikes me as a slightly daft statement - I'll wager that GF4 contained more technology patents than GF3, and GF3 more than GF2, etc., etc.

Yeah, but it says "than anything else in NVIDIA history" there; I wouldn't compare that to previous NVIDIA cards...

Maybe I'm wrong here, but there is a large number of NVIDIA patents right now... probably all for the NV30... no such number of patents went into previous NVIDIA cards... correct me if I'm wrong...
 
alexsok said:
Yeah, but it says "than anything else in NVIDIA history" there; I wouldn't compare that to previous NVIDIA cards...

Ummm, if it included tech from one more patent than GF4, that would be more than anything else in NV history, wouldn't it? This is just another form of PR dickwaving - have you done an analysis of the number of patents granted to ATI or anyone else over the same period?

As for the number of patents that have been granted, remember that many of them relate to GigaPixel and 3dfx technology applied for prior to the end of 1999, not all of which may or will be used.
 
alexsok said:
Yeah, but it says "than anything else in NVIDIA history" there; I wouldn't compare that to previous NVIDIA cards...

OK, I've made my opinion of the first released video pretty clear :LOL:. The 2nd and 3rd are clearly spoofs, and done well enough to be humorous. Do you really want to take lines from a spoof and treat them as significant statements on the "power" of the nv30, alexsok?

*ponders*

I guess that was a silly question, as those lines are all there is to go on, aren't they? :-?

Aren't you beginning to build up a tolerance to the nv30 hype yet?
 
Ummm, if it included tech from one more patent than GF4, that would be more than anything else in NV history, wouldn't it? This is just another form of PR dickwaving - have you done an analysis of the number of patents granted to ATI or anyone else over the same period?

As for the number of patents that have been granted, remember that many of them are relating to GigaPixel and 3dfx technology that had been applied for prior to the end of 1999, not all of which may or will be used.

Nope, I haven't done an analysis of that, simply because I haven't seen any lately... :)

Yeah, I'm well aware that most of the patents relate to GigaPixel and 3dfx, but that's not a bad thing now, is it? It's just that lately I've seen many patents from NVIDIA (from this year, not the ones mentioned above), so I tend to believe the statement in that video... maybe I'm wrong, who knows... but this all fits pretty well with the CEO's statement about the NV30 being "their biggest contribution to the graphics scene since the founding of the company".
 
Aren't you beginning to build up a tolerance to the nv30 hype yet?

Yeah, I am, m8.

To tell you the honest truth, NV30 didn't turn out to be as good as I expected it to be, and I was warned about that by many people (including Dave - remember? ;) )
 
Yeah, I'm well aware that most of the patents relate to GigaPixel and 3dfx, but that's not a bad thing now, is it?

Eh? Of course it's not a bad thing – to a semicon company, its IP portfolio is one of its most important assets (remember, that's all NVIDIA paid the $100 million to 3dfx for, nothing else). But that doesn't necessarily mean that all of it will be used in their products, especially when the tech is bought in from elsewhere.
 
Eh? Of course it's not a bad thing – to a semicon company, its IP portfolio is one of its most important assets (remember, that's all NVIDIA paid the $100 million to 3dfx for, nothing else). But that doesn't necessarily mean that all of it will be used in their products, especially when the tech is bought in from elsewhere.

Sure, but there is a possibility that some of it will be used in NV30...

Well, the announcement is not too far away... we should know most of it then...
 
Eh? I thought the end cost to NVIDIA was more like $10 million.

Otherwise the 3dfx shareholders might have actually gotten something from the buyout.
 
Russ,

I found these press releases at NVIDIA's web site.


April 19, 2001: NVIDIA Subsidiary Completes Its Purchase of Certain 3dfx Graphics Assets

On December 15, 2000, NVIDIA US Investment Company, NVIDIA and 3dfx signed an asset purchase agreement for NVIDIA US Investment Company to purchase from 3dfx certain graphics assets, which include, but are not limited to, patents, patent applications, trademarks, brand names and other graphics related assets. At the closing, NVIDIA US Investment Company paid $55 million in cash to 3dfx. Subject to the satisfaction of certain additional conditions, 3dfx may receive additional consideration from NVIDIA US Investment Company in the form of cash and/or shares of Common Stock of NVIDIA Corporation.

http://www.nvidia.com/view.asp?IO=IO_20010530_5354


December 15, 2000: NVIDIA To Acquire 3dfx Core Graphics Assets

Under the terms of the agreement, NVIDIA will pay to 3dfx a total consideration of $70 million in cash and 1 million shares of common stock.

http://www.nvidia.com/view.asp?IO=IO_20010612_6602
 
To tell you the honest truth, NV30 didn't turn out to be as good as I expected it to be, and I was warned about that by many people (including Dave - remember? )

Well, of course I don't know whether how you now think the NV30 "turned out" is actually how it turned out, but I'm less concerned about warning "you" off high expectations than I am about sites like Anand.

(Stands on small soap-box)

In past articles Anand has made repeated allusions to NV30 specs and performance being greater than those of the R300, with such proclamations as "Believe it or not, on paper, NV30 is even faster than the R300".

What I said about this back then looks like it might come true again.

Until a company officially publishes specs, no respectable hardware site should be making ANY type of relative performance prediction... EVEN just on paper. There's a reason why specs are not officially publicized until relatively close to shipping... because they are subject to change, sometimes drastically.

ESPECIALLY with respect to clock speed. I've mentioned several examples in the past where nVidia either definitely, or more than likely, missed internal clock speed targets: the X-Box chip (definite and very significant downclock), TNT (definite, significant downclock), GeForce1 (likely downclock, IMO). Based on rumored specs, I'm fairly certain that nVidia was throwing around clock speeds in the 400+ MHz range for NV30 this past summer, giving Anand reason for his high anticipation of the part. (Which Anand will happily pass on to readers...)

And if they achieve that 400+ MHz...great. But the point is, it wouldn't surprise me in the least if they fall considerably short of that target, especially given the TSMC problems.

It would be interesting to see who Anand "blames" for his mispredictions if the NV30 is only "merely competitive or even worse" than the 9700. Will he blame himself for giving out info prematurely? Will he blame nVidia for not "delivering what he expected"? Will he just ignore it altogether? In any case, perhaps he will learn something from it... again, assuming the NV30 does not meet the specs Anand was led to believe...
 