good boy. cookie?
marconelly! said: Ah, I see. Well, as it says, those are not even floating-point operations, programmable or fixed. Those are not even nvFLOPS. They probably count even the most trivial things like memory reads into these OPS.
Watch the column that says 1.03-1.23 Trillion Operations Per Second
cthellis42 said: We're ALL used to "number claims" from companies right now, so what the hell is the point of arguing in life-or-death manner over "IS!" or "IS NOT!" right now? We won't know the context of the 1TFLOP claim until Sony makes it clear, we won't know how to put it in perspective until we know the rest of the architecture, we won't know what it translates to in games and other applications until they are OUT and in front of us, and we probably won't even see much of the "full picture" until a year or two AFTER launch, when developers have had more time to bring out its capabilities. It's what we've seen before, it's what we see now, and it's the ONLY thing we can really "depend on happening" in the future.
chap said: Thus, I would assume the 1 trillion thingie IS the fabled NVFLOPS, no?
rabidrabbit said:awww that love-hate relationship I feel between those two is just so hearty.
Wonder if it is mutual, or is it gonna end up tragically unrequited
ChryZ said: london-boy said: don't know what more we can do.
Ignore or filter the noise, that will raise the signal. Feeding the noise will degrade the signal ...
The only thing I am sure of is that no matter WHAT the PS3 delivers, there will be a lot of argument about it.
Not that I want to insist anything, but let's just say the 200GFLOPS IS NVFLOPS. Any ideas on how they arrived at the number?
Is there any reason for Nvidia to use "faked" FLOPS?
I mean, doesn't that "fool" the developers?
Is it even "legal"?
What's the "comparable" FLOPS rating?
Do you know the "actual" FLOPS ratings of, say, the GF1 -> GFFX cards, so we can try to establish a trend?
Hell, is there even a "right" way to calculate 3D rendering FLOPS?
Any good place to read about NVFLOPS in more detail? And of course, any recommended sites to learn more about FLOPS in current PC 3D hardware?
FLOPS is short for FLoating point Operations Per Second, and hence is used to measure the rate of computational capacity.
Usually an 'op' is either an addition or a multiplication. Divisions are not counted, as they are vastly more expensive to do fast; so much so that divisions are typically done by multiplying with the reciprocal (a/b -> a * (1/b)).
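The reciprocal trick above can be sketched in a few lines of Python (the function name is mine, purely illustrative): rather than paying for a division per element, pay for one division up front and use cheap multiplies afterwards.

```python
def scale_by_inverse(values, divisor):
    """Divide every element by `divisor` using the a/b -> a * (1/b) trick."""
    inv = 1.0 / divisor          # one expensive division...
    return [v * inv for v in values]   # ...then only cheap multiplies

# Note: the result can differ from true division in the last bit of
# precision, which is why hardware reciprocal paths are "approximate".
print(scale_by_inverse([2.0, 4.0, 6.0], 2.0))
```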
The numbers normally quoted in sales material are the theoretical maximum (the 'guaranteed not to exceed' number), which is often a poor measure of the true throughput of the chip/solution.
For nvidia to reach a number of 200GFLOPS, they probably count every single floating-point unit on the chip, assume that each is active every single cycle, and multiply by the number of cycles per second. Looks good... but isn't very informative.
Take the R300, with 8 pixel shaders and 4 vertex shaders, each of which can do 8 FP ops per cycle, for a total of 31GFLOPS of shading power. One would assume that the NV3x would wipe the floor with the R300 in FP. It doesn't.
That's because a lot of the FP power in the Nvidia number goes into "hidden" stuff like triangle setup, FP<->int conversions, iterators, etc., whereas the 31 GFLOPS for the R300 is raw shader power.
As for general purpose CPUs:
The P4 can do 4 FP ops per cycle for 12+ GFLOPS at 3+ GHz; the Athlon/Opteron can do a similar number of ops per cycle but is clocked (a lot) lower (though it has fewer restrictions on issuing FP ops, so it probably has slightly higher throughput per cycle than the P4).
Gekko can do 2 FP MADDs (2 FP adds and 2 FP muls) per cycle, so just under 2GFLOPS.
Cheers
Gubbi
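All the peak numbers in Gubbi's post fall out of the same back-of-envelope formula: units x ops-per-unit-per-cycle x clock. A minimal Python sketch; the clock speeds (R300 at ~325 MHz, Gekko at ~486 MHz) are my assumptions for illustration, not figures from the post.

```python
def peak_gflops(units, ops_per_unit_per_cycle, clock_hz):
    """Theoretical peak: every FP unit assumed busy every single cycle."""
    return units * ops_per_unit_per_cycle * clock_hz / 1e9

# R300: 8 pixel + 4 vertex shaders, 8 FP ops/cycle each, ~325 MHz (assumed)
print(peak_gflops(8 + 4, 8, 325e6))   # roughly 31 GFLOPS of shader power

# P4: 4 FP ops per cycle at 3 GHz
print(peak_gflops(1, 4, 3e9))         # 12 GFLOPS

# Gekko: 2 FMADDs/cycle = 4 FP ops/cycle, ~486 MHz (assumed)
print(peak_gflops(1, 4, 486e6))       # just under 2 GFLOPS
```

This is exactly the kind of arithmetic a marketing department runs: it says nothing about whether those units can actually be kept busy.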
As Gubbi pointed out, FLOPS is a somewhat generic measurement. To determine the FLOPS rating of an IC, usually the per-second throughput of its fastest FP operation is measured. This makes FLOPS one of the most meaningless performance metrics there is (even worse than OPS counts). For an in-depth, easy read on the various reasons for this (besides the obvious difference in arithmetic complexity between a fully IEEE-compliant 128-bit FP division and an arbitrarily rounded 16-bit FMAC, where the first is regarded as a single op while the second counts as 2...), I'd really recommend the beginning chapters of Computer Architecture by Hennessy & Patterson. Another good example: when FP coprocessors became add-in options for low-end desktop PCs, your FLOPS rating actually shrank when you added one even though your performance went up, as emulating FP operations with integer arithmetic usually yielded higher instruction throughput.
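The op-counting convention described here (a multiply-add scores 2 "flops", a division scores only 1, regardless of actual hardware cost) can be made concrete with a toy sketch in Python. The functions and counts are illustrative accounting only, not a benchmark.

```python
def dot(a, b):
    """Dot product of length n: n muls + n adds = 2n 'flops' by convention."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y          # one multiply + one add: counted as 2 "flops"
    return acc

def elementwise_div(a, b):
    """n divisions: only n 'flops' by the same convention, even though each
    division typically costs many times the cycles of a multiply-add."""
    return [x / y for x, y in zip(a, b)]

n = 1000
print("dot of length", n, "=", 2 * n, "flops")   # 2000 "flops", fast
print(n, "divisions =", n, "flops")              # 1000 "flops", much slower
```

So a chip doing nothing but cheap multiply-adds racks up twice the "flops" of one doing the same number of far more expensive divisions, which is one reason the metric is nearly meaningless on its own.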
chaphack said: That be the book to read? Hmmm.... say, for someone young with minimal knowledge of computers, is it digestible/fun if you're interested in 3D graphiX?
well, if anything, it is just for fun reading about 3D graphiX. Hell no am I going to write code or develop a game!
No idea, but just out of curiosity, how old are you anyway? If anything, I think programming would be a good start, as it could give you some perception of what 'performance' is, the validity of memory, and what potentially ultimate freedom can give you when developing something.