Final Fantasy & NV30

Really? When? Before the end of the NDA? Surely not, so what's your point?

No, I understand NDAs.

What I was saying is that 51 GFLOPS is still a *stat* and 100 dinos is a demonstration of the chip's power. Both are intended to *wow* the crap out of you. ATI released no total BS nonsense like this before the 9700 launched.

This is the same **** Sony pulled with the PS2 launch. All the crap about GFLOPS, the military saying it was a threat to national security, and the 50 other pieces of total BS they spewed. Yet the final product was barely better than a Dreamcast, the very system it killed with all the outright LIES Sony told.

Nvidia is doing the same thing. I don't think it's cool, I don't think it's funny, etc. I think it's detestable. They are just spewing forth outlandish claims and vagueisms that they will NEVER be able to fulfill with the final product. But everyone will "ooooooh" and "aaaaaah" about it, and BELIEVE IT. They still have never pulled off the BS claims they made about the GF3, and it's two years later.

I can't wait for the real NV30 to premiere. They had better pray to the god of skinny punks that it actually *beats* the 9700 in FSAA + aniso tests on more than their Nvidia tech demos and jerry-rigged applications designed on Nvidia hardware.
 
Hellbinder, you have a point. The marketing numbers shown by NVidia are incredibly contrived. Even if the pixel shader unit does perform 51 GFLOPS peak in some way (i.e. 8 pipes, 8 half-float vec3 MADs per cycle at 400 MHz), it's meaningless. If they quoted a sustained figure, it would still be meaningless to the average consumer. Rendering 100 Jurassic Park dinos 100 times a second is downright misleading (not sure ATI didn't do the same with Lord of the Rings...).
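Just to show how easily a number like that is manufactured, here's a back-of-the-envelope sketch. The counting convention (MADs issued per pipe per clock, 2 FLOPs per MAD) is my own assumption, one reading of the figures above that happens to land near 51:

```python
# Back-of-the-envelope peak-FLOPS arithmetic. The convention here
# (8 MADs issued per pipe per clock, 2 FLOPs per MAD) is just one reading
# that lands near 51 -- not NVidia's actual accounting.
def peak_gflops(pipes, mads_per_pipe_per_clock, flops_per_mad, clock_ghz):
    return pipes * mads_per_pipe_per_clock * flops_per_mad * clock_ghz

print(peak_gflops(8, 8, 2, 0.4))  # -> 51.2 "GFLOPS peak"
```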

Anyway, what can you do?
 
NVidia Marketing

Our Quality NV30 product takes only the best silicon and feeds it a pure supply of hand-picked electrons, each one perfectly round. We deliver each of these electrons with pride and care exactly where they are supposed to go, each time, every time, 24 hours a day, all 4,406,400,000,000,000 FLOPS' worth :)

Our chips are baked to perfection and cased in a sexy black ceramic coating you almost wish you could wear. All our cards are perfectly machined and balanced to slot effortlessly into your computer, or your money back.

Our rivals only wish they had the market share. On performance, well, let me confide to you that our competitors are green with envy over the performance attributes our engineers tell us are important. Every major magazine will soon hail our presence; expect an NVidia chip in every seat of every international flight, so you can deathmatch the hell out of the lucky bastards sitting in first class.

And more, much much more drivel...
 
For the end customer, neither a peak theoretical vertex rate nor a FLOPS rating has any relevance. Both are peak theoretical rates of some abstract computing resource that have no bearing on most games, and there is no way for the average consumer to translate a FLOPS or vertices/sec figure into actual performance in games.


It is the same situation in the computer industry. How does a consumer compare two CPUs? Clock frequency? SpecFP? Today, consumers have settled on blindly comparing CPU frequencies, which is why AMD had to market their chips with misleading model numbers so consumers could gauge relative performance. The closest thing to a "real world" benchmark is the TPC suite or SpecJBB.

Even people on these forums have a habit of latching onto a single figure, raw bandwidth or raw fillrate, which has a way of disadvantaging cards like the Kyro in the same way that AMD is disadvantaged in the clock frequency race against Intel.

In the console arena, it was bus size. 8-bit vs 16-bit vs 32-bit vs 64-bit vs 128-bit, etc. Advertising and marketing directed consumers to decide on the basis of bus size.


I think it is hypocritical to criticize NVidia for posting a peak floating point power figure while defending ATI for posting peak theoretical vertex figures (and of course, in the GeForce days, NVidia was criticized for posting peak T&L and fillrate figures).

NVidia is legitimately trying to "measure" the performance of "smarter pixels". Comparing actual peak single-texturing, no-shader fillrate is somewhat meaningless if one card can execute twice the number of shader operations per pixel.

We need a measure of GPU power that is in accord with DirectX9 -- shader ops per second.
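For illustration, here's a rough sketch of what that metric looks like and what it buys you on a real shader; the numbers and the one-op-per-pipe-per-clock assumption are hypothetical, not any shipping chip's spec:

```python
# Hypothetical "shader ops per second" headline figure, and what it means
# once a shader of a given length is running on every pixel.
def shader_ops_per_sec(pipes, ops_per_pipe_per_clock, clock_mhz):
    return pipes * ops_per_pipe_per_clock * clock_mhz * 1e6

def pixels_per_sec(ops_per_sec, shader_length_ops):
    return ops_per_sec / shader_length_ops

rate = shader_ops_per_sec(pipes=8, ops_per_pipe_per_clock=1, clock_mhz=400)
print(rate)                       # 3.2e9 shader ops/s
print(pixels_per_sec(rate, 20))   # 1.6e8 pixels/s on a 20-op shader
```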


My point is: 99% of consumers decide their purchases on mostly irrelevant performance-spec grounds, whether it is cars, DVD players, CPUs, etc. It is really only the *-philes who (in some cases) understand what the numbers really mean.

Look at it this way: On paper, Parhelia looks like it rocks. Reality?
 
Fuz said:
Nagorak said:
I don't think gaming cards and rendering will ever converge, to be honest. Maybe ATi and Nvidia will build farms outfitted with the same GPUs as their gaming cards, but they won't render in real time individually or together.

I disagree. I think that eventually, pretty soon I bet, you will be able to buy a gaming card that is also found in a professional workstation.
Rendering that was once done on a CPU, or a whole bunch of CPUs, will now be done on a very cheap (relatively speaking) graphics chip or chips at a faster pace.

It doesn't mean it will be in real time, but it will definitely be quicker than using a CPU.

They will converge.

I thought that's what I just said? Except I was looking at their convergence from a different angle...

I also don't think it is intuitive to believe that gaming cards and rendering cards would be the same. They may use the same chip, but the demands of rendering are totally different from gaming. A rendering card might have 4 GPUs on it, for example, and a lot of RAM, etc. It's not just going to be the same exact card with a different name.

Also I don't see how render farms are going to save money this way. They move from CPUs to GPUs...so what? Yeah the GPUs will be faster because they are more specialized. Will they be cheaper? I really doubt it...maybe a little just to encourage people to switch over from their CPU farms.

Hellbinder[CE] said:
This is the same **** Sony pulled with the PS2 launch. All the crap about GFLOPS, the military saying it was a threat to national security, and the 50 other pieces of total BS they spewed. Yet the final product was barely better than a Dreamcast, the very system it killed with all the outright LIES Sony told.

The only threat to national security that the PS2 offered was allowing hostile nations to see how dumb the American public was, based on their bad taste in games. ;)
 
According to reports on the show floor, it was closer to 15 fps, not 2.5 fps. There's a big difference.

The 2.5 fps was for the GF3 and the 15 fps was for the GF4.

Indeed... The GS16 was running the scenes at 1080p @ 60 fps. One can also sandbag the statement of "doing FF:TSW in real time" depending on which scenes you're rendering... None of the demonstrations have done the whole movie, just select scenes, and there's *a lot* of difference between rendering Cid and rendering Aki or the marines in the outdoor scenes.

In the case of the GS Cube demo, data was being pushed from an Origin 3000, and scene dynamics were also being computed on the Cube (although Aki's hair dynamics were pre-computed)...
 
Pointless argument. Even my ZX Spectrum could render FF in real time at 2x2 resolution. Problem is, the movie wasn't rendered that way.
If NV is able to produce pixel-accurate frames at exactly the same resolution the movie was rendered at, at the movie's frame rate, then they are rendering FF in real time. The methods and algorithms don't matter; they may be doing it smarter or dumber, I don't care.
Otherwise they are just doing an approximation and subjectively claiming that the quality is "good enough". And, as we all know, what's "good enough" for some isn't even barely acceptable to others.
Especially when those others are aware of the insignia on the magic chip under discussion ;)
 
I'd actually defend the "rendering such-and-such movie in real time" statements if they are backed up with an actual demonstration that is impressive to look at (and really being done as presented). As I understand it, the first time they made the statement it wasn't, but subsequently they delivered where it counts: a scene that looked comparable to movie shots. The reason I'd defend this is that it wasn't an ad; it was a presentation evangelizing the possibilities of the power the card offered.

Now, all those other figures they are throwing around are pretty damned misleading and silly. Big numbers to impress the consumer with some inherent dishonesty (100 Jurassic Park dinosaurs comes to mind...the image you'd likely think of is a highly detailed and highly textured rendering of a dinosaur...but what does nVidia have in mind...a wireframe rendering?). But without using it to distort a comparison to another product (unless I missed something) I still can't fault them too heavily for it. EDIT: Well, actually I easily could, but I wouldn't choose to bother. :LOL:

Actually, I think the peak gigaflop figure is the most defensible of the numbers, since they are targeting competition with CPU rendering. Then again, I'm not sure of the actual relationship between the figure and how the card competes with CPUs, so perhaps not.
 
DemoCoder said:
We need a measure of GPU power that is in accord with DirectX9 -- shader ops per second.

I have been thinking somewhat about this. We need a benchmark that can measure simple, standard, and complex-length shaders with both a limited and a massive set of data (e.g. texels, etc.). I guess we also need to measure these under different bit precisions.

The point of this benchmark is to disclose what the chip/architecture is tuned to do best. If it is done well, it would be interesting to see the grand R9700 vs. the CineForce FX (NV30).
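A sketch of what such a benchmark matrix might look like; the category names, shader lengths, and precisions below are my own placeholders, not an existing tool:

```python
# Sketch of a shader-throughput benchmark matrix: sweep shader length,
# working-set size, and arithmetic precision, recording shader ops/sec for
# each combination so a chip's sweet spot becomes visible. All names and
# values are illustrative placeholders.
from itertools import product

SHADER_LENGTHS = {"simple": 4, "standard": 16, "complex": 64}  # ops per pixel
WORKING_SETS = {"limited": 1, "massive": 8}                    # textures sampled
PRECISIONS = ["fp16", "fp24", "fp32"]

def run_case(shader_ops, textures, precision):
    """Placeholder for the actual rendering test; should return shader ops/sec."""
    raise NotImplementedError("hook the real GPU test up here")

def benchmark():
    results = {}
    for (lname, ops), (wname, tex), prec in product(
            SHADER_LENGTHS.items(), WORKING_SETS.items(), PRECISIONS):
        results[(lname, wname, prec)] = run_case(ops, tex, prec)
    return results
```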
 
Consider that all of the Jurassic Park dinos were NURBS geometry - that means render times depend heavily on the tessellation level. Using a low enough setting you might get it down to 20,000-30,000 triangles...
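A quick bit of arithmetic on what that does to the "100 dinos, 100 times a second" claim; the 25,000-triangles-per-dino figure is just a midpoint of the range above:

```python
# Triangle throughput implied by "100 dinos rendered 100 times a second",
# assuming roughly 25,000 triangles per dino at a low tessellation setting.
dinos, fps, tris_per_dino = 100, 100, 25_000
print(dinos * fps * tris_per_dino)  # 250,000,000 triangles/s
```

In other words, at a low tessellation setting the claim collapses into a peak triangle-rate figure, which is exactly the kind of number being marketed anyway.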
 