Worst case? For all we know, it could be another NV30 case; the worst case is much, much worse than that.
Why are you so skeptical about the chip? If this is your opinion, you will be surprised in the coming months.
Let's look at the worst case: the HD 5870 X2 is 20% faster than the GeForce 380. Then you have to ask at what price, because the GeForce 380 is a single GPU (no multi-GPU profiles, no micro-stuttering, etc.) which will not consume more power than a GTX 280. Looking at the HD 5870 X2, I hope it will stay under 275 watts.
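A quick back-of-envelope of that worst case, purely as a sketch: the 20% figure and the 275 W hope are the assumptions stated above, and the GTX 280's 236 W TDP stands in for the single-GPU power ceiling; none of this is measured data.

# Back-of-envelope for the "worst case" above. The 1.2x performance figure and
# the 275 W board power are the assumptions stated in the post; the 236 W TDP
# of the GTX 280 is used as the claimed ceiling for the single-GPU card.
gtx280_tdp_w = 236.0   # GTX 280 TDP, taken here as the single-GPU power ceiling
x2_power_w   = 275.0   # hoped-for HD 5870 X2 board power
x2_perf      = 1.20    # HD 5870 X2 assumed 20% faster...
single_perf  = 1.00    # ...than the single-GPU GeForce 380

x2_ppw     = x2_perf / x2_power_w          # ~0.0044 "perf" per watt
single_ppw = single_perf / gtx280_tdp_w    # ~0.0042 "perf" per watt
print(f"X2: {x2_ppw:.4f} perf/W, single GPU: {single_ppw:.4f} perf/W, "
      f"ratio {single_ppw / x2_ppw:.2f}x")
# Under these numbers the two land within a few percent of each other in
# perf/W, so the argument really rests on the single-GPU advantages (no AFR
# profiles, no micro-stuttering) and on price.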
They are probably both already VLIW+SIMD at this point (well, NVIDIA is more LIW+SIMD, but same difference). Whatever else happens, VLIW is here to stay for a while yet, IMO.
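To make the VLIW point concrete, here is a toy sketch in Python (nothing vendor-specific, not a real compiler or ISA): the essence of a VLIW lane is that the compiler packs independent scalar ops into wide issue slots, whereas a purely scalar machine issues one op per clock per lane and extracts parallelism across threads instead.

# Toy model of VLIW-style static scheduling: pack independent scalar ops into
# up to `width`-wide bundles (think of AMD's 5-wide lanes). Purely illustrative.
def pack_vliw(ops, width=5):
    """ops: list of (name, set_of_dependencies). Returns a list of bundles."""
    done = set()
    remaining = list(ops)
    bundles = []
    while remaining:
        bundle = []
        for op in list(remaining):
            name, deps = op
            # An op can issue only if all of its inputs were produced in an
            # earlier bundle (co-issued ops must be independent of each other).
            if deps <= done and len(bundle) < width:
                bundle.append(name)
                remaining.remove(op)
        if not bundle:
            raise ValueError("dependency cycle")   # guard against bad input
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# A vec4 MAD whose components are independent packs into one bundle; the
# reciprocal that consumes one of the results has to wait for the next slot.
print(pack_vliw([("mad.x", set()), ("mad.y", set()), ("mad.z", set()),
                 ("mad.w", set()), ("rcp", {"mad.x"})]))
# -> [['mad.x', 'mad.y', 'mad.z', 'mad.w'], ['rcp']]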
Hmmm, in the course of a day we've gone from GT300 showing up in 6 months to GF100 being as fast as RV870X2. Can't wait to see where we end up tomorrow.
I still don't think it will ship this year ... hell, I don't think we will get a clear shipping date when the first official information is released.
Chips being late month(s) haven't had a good track record usually.
Why is GF100 late? How do you come to that conclusion? I reported in January that Nvidia's next-generation chip will come in Q4/2009. So where do you see a delay?
Anyone else getting the feeling that two parallel Universes have become entangled? With the different names, dates, specs, problems/non-problems it's like we're talking about two different parts from two different companies.
I need a lie down.
I'm not making any bets anymore. Last time I had to write a public apology to Rys LOL.
It is not always an advantage to try to be a man of your word, yet a promise is a promise. Days before the G80 launch, while chatting in the B3D IRC channel, I told Rys that if I managed to buy an 8800GTX before the start of December I'd owe him a public apology. G80 was officially announced on November 8 and I was holding a Gainward Bliss 8800GTX in my hands on the 14th. Thus, I have no other choice than to send him my apologies. Don't worry though, this one is amongst the cases where I simply love being wrong.
Wouldn't SFUs mostly scale by texture unit, rather than SP?
D3D10.1 requires 32 vec4 attributes per vertex to be supported, as opposed to 16 in D3D10. So that doubling in interpolation workload might steer the architects in the direction of increasing interpolation rate. Except, of course, that merely by adding ALUs, the increase occurs. So really what it comes down to is rasterisation:interpolation rate.
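For concreteness, the doubling works out like this (the pixels-per-clock figure below is just an illustrative rasterisation rate, not a claim about any particular chip):

# Scalar interpolants per pixel implied by the API limits quoted above.
def scalar_interpolants(vec4_attribs):
    return vec4_attribs * 4                 # x, y, z, w per attribute

d3d10   = scalar_interpolants(16)           # 64 scalars, D3D10 worst case
d3d10_1 = scalar_interpolants(32)           # 128 scalars, D3D10.1 worst case

pixels_per_clock = 32                       # assumed rasterisation rate, purely illustrative
print(d3d10, d3d10_1)                       # 64 128
print("worst-case interpolations per clock:",
      pixels_per_clock * d3d10, "->", pixels_per_clock * d3d10_1)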
If textures remain fixed at 8 per TPC, I would think 4 SFUs would suffice (assuming, as you say, that these aren't indicating per-lane).
Hmm, are you arguing for moving some of the "SF" instructions into the SPs? I would think log/rcp would be really useful there (hence my previous link). SF could be relegated to sin/cos approximations, or those blue dots might be something else entirely.
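As a rough sketch of the ratio being debated here, with the per-SFU interpolation rate purely assumed (it is not given anywhere above):

# All per-unit rates here are assumptions for illustration only.
sfus_per_tpc       = 4    # the count suggested above
interp_per_sfu_clk = 4    # assumed: one quad's worth of interpolants per SFU per clock
tmus_per_tpc       = 8    # texture units per TPC, as stated above

interp_per_clk = sfus_per_tpc * interp_per_sfu_clk   # 16 scalar interpolants/clock
texels_per_clk = tmus_per_tpc                        # 8 bilinear fetches/clock
print(interp_per_clk, texels_per_clk, interp_per_clk / texels_per_clk)  # 16 8 2.0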
Why would a chip with +50% complexity be "minimum on par" with dual HD5870?..
There are so many ways to make a chip "complex". Do it smartly and you can get way more performance - it's a question of how radical you're prepared to be.
Anyway, I'm still expecting NVidia to be pretty radical. NVidia's RV670->RV770 as it were. Plus some, with a bit of luck.
We can certainly hope for something like this. But RV670->RV770 was accomplished by eliminating mostly obvious mistakes in the R600 design and by some magic which allowed them to pack 2.5 times more ALUs into almost the same complexity (transistors and die size). So while we can hope for GF100 to somewhat repeat that success, I'd say that counting on it as a "minimum" is highly unrealistic.
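For reference, that "magic" in numbers, using the public die and transistor figures as commonly quoted (treat them as approximate):

# RV670 vs RV770, both on 55 nm. Figures are the commonly quoted public ones
# and should be treated as approximate.
rv670 = {"alus": 320, "transistors_m": 666, "die_mm2": 192}
rv770 = {"alus": 800, "transistors_m": 956, "die_mm2": 256}

alu_scale   = rv770["alus"] / rv670["alus"]                      # 2.5x ALUs
trans_scale = rv770["transistors_m"] / rv670["transistors_m"]    # ~1.44x transistors
area_scale  = rv770["die_mm2"] / rv670["die_mm2"]                # ~1.33x die area
print(f"ALUs {alu_scale:.2f}x, transistors {trans_scale:.2f}x, "
      f"die area {area_scale:.2f}x, ALUs/transistor {alu_scale / trans_scale:.2f}x")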