NV30 : not so fast?

Ok, RC says that it "taped out" back in May. The CEO said last week that it has not (at least not as of that interview). Can someone please get a straight comment from Huang?!
 
Didn't Pixar say that you needed just over 20 GB/s to render Toy Story in real time? Anyone remember what that quote mentioned?

Yes, I have been trying to find this quote for the last three months. If my memory doesn't fail me, it came up when nVidia was talking about the GF3. Does anyone remember when they were talking about "Toy Story" quality? The quote was that they could get close to that, but they figured they would need over 20 GB/s to make it happen in real time.

hmm.. off to google I go...

-Simon
 
DaveBaumann said:
We were correct in almost all our predictions except the memory speed. Nvidia wants it to be about 1 GHz, delivering an amazing 48 GB/s of bandwidth when accompanied by the 3rd generation of their LightSpeed Memory Architecture. We are not sure that Samsung will be able to deliver 1 GHz DDR-II memory to them by September.

We are not at liberty to discuss this, and I doubt they are either if they gained this through official channels. They did, however, miss a very important word out when talking about that bandwidth number.

:eek:

Goddammit, I just read again what Dave wrote in his short news bite about NV30 specs and bandwidth.

The 'focus on computational efficiency' suggests that there are new overdraw reduction routines in the pipeline, as doing this level of computation per pixel can end up being very expensive for pixels that are overdrawn; however, it appears that no details of these routines are in the public domain as yet.

Oh my, could they really be going down the Gigapixel road, or using some other form of deferred rendering, after all this "we don't need to do that" talk? Could they be going all the way instead of sticking with LMA I & II?

Oh my, and maybe the rumoured "first" tape-out in May/June was indeed just a silicon test of the part of the chip that handles this new deferred rendering architecture. All these crazy rumours make sense this way...
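
For what it's worth, the back-of-the-envelope arithmetic is what makes Dave's hint about a missing word so interesting. A rough sketch (the bus widths below are purely my own hypothetical assumptions, since nothing official has been stated):

```python
# Raw memory bandwidth = effective data rate (transfers/sec) x bus width (bytes).
# The "about 1 GHz" DDR-II figure comes from the quoted news bite; the bus
# widths are hypothetical, picked just to bracket the plausible range.
effective_rate_hz = 1_000_000_000  # ~1 GHz effective DDR-II data rate

for bus_width_bits in (128, 256):  # hypothetical bus widths
    raw_gb_per_s = effective_rate_hz * (bus_width_bits // 8) / 1e9
    print(f"{bus_width_bits}-bit bus -> {raw_gb_per_s:.0f} GB/s raw")

# 128-bit -> 16 GB/s, 256-bit -> 32 GB/s: neither raw figure reaches 48 GB/s,
# which fits the hint that the 48 GB/s number is an "effective" one, i.e.
# raw bandwidth multiplied up by some overdraw/occlusion-reduction scheme.
```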

Somebody, hand me a cold beer now... 8)
 
RussSchultz said:
Isn't it great how unsubstantiated information quickly becomes fact?

I agree entirely. Don't you love how the nVidia PR machine works? It seems like a constant flow of trash now about the NV30... all based on vapourware, AFAIK.

http://www.3dgpu.com/yabb_se/index.php?board=2;action=display;threadid=1063

http://www.hardforums.com/showthread.php?s=a06c34baa3e141450f0fa956e43e656b&threadid=468011

http://www.rage3d.com/board/showthread.php?s=df83e935c371d137c721ef229a18d7c6&threadid=33629771
 
You mean web sites weren't carrying "trash", I mean, endless speculation, rumors, and feature leaks in the months leading up to the R300 release?

Geek_2002, please take off those glasses.
 
DemoCoder said:
You mean web sites weren't carrying "trash", I mean, endless speculation, rumors, and feature leaks in the months leading up to the R300 release?

Geek_2002, please take off those glasses.

[sarcasm]Yeah, I remember all sorts of rumors about the R300 being 3 times faster than the GeForce4 Ti 4600[/sarcasm]. I don't remember, though, any PDF from ATI being circulated giving misleading info with regard to the R300, or for that matter ATI's CEO claiming that the R300 had taped out when it clearly hadn't. Furthermore, if you are also saying that you think the information is "trash", then we are in full agreement.

Surely your glasses have a few cracks in them since this nVidia/NV30 debacle started. If not, then your glasses are surely a considerably stronger prescription than mine. ;)
 
Simon Templar said:
Didn't Pixar say that you needed just over 20 GB/s to render Toy Story in real time? Anyone remember what that quote mentioned?

Yes, I have been trying to find this quote for the last three months. If my memory doesn't fail me, it came up when nVidia was talking about the GF3.

Perhaps a little late to answer this now, but I'm fairly certain that it was at the launch of the GF2 GTS.

I also have a vague recollection that Pixar mentioned something like 91 GB/s, not twenty, but I'm very far from sure of that part (after all, it wasn't yesterday I read it).
 
I thought the quote had to do with the number of triangles per frame?
(80 million is popping into my head)

Serge
 
Allow me to bring you a quote from one of nVidia's papers:

Alvy Ray Smith (MS Graphics Research Fellow & Pixar tech guy) would like 80M polys per frame
• That's 4.8 billion polys per sec at 60 Hz

Does that answer your questions? :D
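
Just to sanity-check the slide's own arithmetic (a trivial sketch; the 60 Hz refresh target is the one quoted on the slide):

```python
# Alvy Ray Smith's wish from the nVidia slide: 80M polygons per frame at 60 Hz.
polys_per_frame = 80_000_000
frame_rate_hz = 60

polys_per_second = polys_per_frame * frame_rate_hz
print(f"{polys_per_second / 1e9:.1f} billion polys/sec")  # 4.8 billion polys/sec
```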
 
psurge said:
I thought the quote had to do with the number of triangles per frame?

Quite possible. I believe nVidia was making noise about its T&L, but there was definitely a memory bandwidth figure involved in Pixar's somewhat mocking response.
 
I'm not sure about workstation cards, but the fastest consumer cards on the 3DMark test can do 40M polys/s, and that test pushes around 1M per frame. Realistically, with textures etc., it's about 10M polys/s currently, which works out to roughly 166,000 per frame at 60 fps. It's going to be quite a while before we start seeing 80M polys in graphical demos, and even longer for games.
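
To spell out the per-frame arithmetic above (a rough sketch; the 10M polys/s "realistic" throughput and the 60 fps frame rate are the estimates used in this post, and the 80M polys/frame target is the figure from the slide quoted earlier):

```python
# Compare today's realistic polygon throughput against the 80M-polys-per-frame target.
realistic_throughput = 10_000_000  # polys/sec with textures etc. (poster's estimate)
frame_rate = 60                    # assumed frames per second

polys_per_frame_today = realistic_throughput / frame_rate
print(f"~{polys_per_frame_today:,.0f} polys per frame today")  # ~166,667

target_per_frame = 80_000_000      # Alvy Ray Smith's wish
print(f"gap to the target: ~{target_per_frame / polys_per_frame_today:.0f}x")  # ~480x
```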
 
You mean web sites weren't carrying "trash", I mean, endless speculation, rumors, and feature leaks in the months leading up to the R300 release?

You know... I've been scouring all over the 'net trying to find one (1) previous GF4 (p)review that states ".. but the R300 will be X times faster than the GF4 .." the way almost all of the current R300 previews do concerning the NV30.

I also haven't been able to find a single pre-tape-out article listing R300 specifics with spuriously quantified performance figures compared to either the 8500 or the GF4.

Does anyone have any links handy for these, as I must have missed them completely?

As far as ReactorCritical goes, I personally wrote off anything they publish as far back as the days they were caught applying PhotoPaint Gaussian-blur filters to screenshots and passing them off as actual product screenshots for comparison purposes. Of course, the untouched screenshots were nVidia's, while the PhotoPaint Gaussian-blurred ones were a competitor's.

I've also not seen a single comprehensive analysis of Quincunx screenshots versus final image quality, since screenshots taken on the GF4 are not representative of the actual image quality (as has already been discussed in another thread here). Amazing they didn't decide to bust out the Gaussian-blur filter to illustrate this rather obvious issue. :)

Well, with RC busting out the paint/editing programs for screenshot comparisons, redefining what "taped out" means for video hardware, and now claiming specific, quantified performance for NVIDIA vaporware (yet making no such claims about BitBoys when presented with the same kind of data for making such determinations)... it only goes to show that the Inquirer would be a more noteworthy and accurate source of information. :)
 
Time to move on?

From what I remember, they edited the pictures to illustrate an effect they could not capture in screenshots. Not a very nice thing to do, but not everything is a conspiracy, contrary to what you and most people on Rage3D believe.
 
I honestly remember hardly any rumors about the R300. The only one I remember is a small tidbit at RageUnderground stating it would be 30% faster than a 4600, but that may be selective memory. All I know is I was very pleasantly surprised at the R300's speed, so much so that I consider it really worth $400.
 
The only serious R300 "review tainting" I remember was Anand's repeated "the R300 will put everything currently in this test to shame"-style teases during the UT2003 GPU roundups, which must have been a month or two prior to the R300's announcement. When the batch of GF4s was released, I guess almost nobody outside ATI had any real clue what to expect from it, so how could anyone speculate? Well, maybe JC did know something, but I rarely see him writing lengthy video card reviews... ;)

ATI kept its cards pretty close to its chest this round; there were few leaks about the R300 long beforehand, but that was mostly because ATI deliberately didn't let any info out (except to selected developers). nVidia, on the other hand, apparently likes briefing the press, and even more so developers, quite some time in advance about its upcoming products, and likes timing its PR to competitors' releases (though ATI seems to be picking up on that too, like launching Catalyst the same day as Cg, hehe). Whether that is good or bad depends on personal preference IMO; both approaches have advantages and disadvantages.

I would advise against confusing rumors brought up by some website or forum f@nboys with actual company info, though. Just because some people speculate on features/performance doesn't mean the nVidia PR machine needs to be behind it all; even nVidia has limited manpower and can only attend to so much misinformation at a time! :LOL:
 
PR can be good or bad. On the one hand, it gets people talking about your product; on the other hand, it raises expectations. Now everyone is expecting the NV30 to be something great, and compared to the Ti 4600 it probably will be. Unfortunately, the bar of greatness has been raised by the R300, so if it doesn't outperform it, all this hype could very well blow up in their face. ("It's only as fast as the R9700? I'm just going to get one of those then!" or worse, "It's slower than the R9700, wtf!")

Now, granted, it may be faster, but since they've hyped it to the moon, if it's not it will look really bad for them. Personally, I'm interested in seeing the product and how it performs. I really would like it if they could get it to market some time soon; it would also lower prices across the board. Unfortunately, I just don't see that happening (despite what RC may say to the contrary). :-?

One thing though: it'd be nice if more people viewed hype and rumors with more skepticism. It seems like a lot of people just get taken in by this stuff hook, line, and sinker.
 
The poly power is fudged; their sources may have mistaken the number ;) . The 3DMark stuff is a bit exaggerated. From what I know, it will achieve that kind of gain over a 4600 only under special conditions that put extreme stress on the video card. OTOH, the next incarnation of 3DMark may show exactly that kind of gain. Doom 3 performance is an estimate, and it was NOT taken on simulators, but on the latest beta board. The tape-out stuff is like this: the final chip should come back any day now; beta chips and boards have been around for some time. Bandwidth is indeed calculated using some kind of overdraw-removal method, but I am not at liberty to discuss it 8) . Hope this helps extinguish the flames.
 
Testiculus Giganticus said:
The poly power is fudged; their sources may have mistaken the number ;) . The 3DMark stuff is a bit exaggerated. From what I know, it will achieve that kind of gain over a 4600 only under special conditions that put extreme stress on the video card. OTOH, the next incarnation of 3DMark may show exactly that kind of gain. Doom 3 performance is an estimate, and it was NOT taken on simulators, but on the latest beta board. The tape-out stuff is like this: the final chip should come back any day now; beta chips and boards have been around for some time. Bandwidth is indeed calculated using some kind of overdraw-removal method, but I am not at liberty to discuss it 8) . Hope this helps extinguish the flames.
HA! What makes your pro-nVidia vaporware rumors more believable than the anti-nVidia rumors we already have!?

(sorry, just had to beat everybody else to it) ;)
 