I thought he was joking
...and I wasn't obviously sure about it.
I thought he was joking
Sarcasm. My apologies for not being able to come up with more ridiculous-sounding words to make it more obvious. Let's see what silent_guy's answer to the question above will be. Unless, of course, it was some form of sarcasm, which I, with my zero knowledge of such matters, naturally cannot detect.
Give this man a cookie for reluctantly including the most probable option!... or whatever.
Oh dear, well I certainly hope developers target something less than a 5760x1200 resolution for their games because all we'll end up with is a lot of ugly. Do you really look forward to running the best that the Xbox and PS3 can do at the end of their lifetimes blown up in all its unrefined glory? There's something to be said for playing less demanding games at a higher resolution but there are much better uses of the available horsepower IMO. In this respect I think Nvidia's strategy is more potent in the longer term because at the end of it all Eyefinity is just upping the resolution.
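For scale, a quick back-of-the-envelope calculation (my numbers, not from the post above): 5760x1200 is exactly three 1920x1200 panels side by side, so the card has to shade three times the pixels for the same scene:

```python
# Pixel counts for a single 16:10 panel vs. a 3x1 Eyefinity/TH2G spread.
single = 1920 * 1200          # 2,304,000 pixels
triple = 5760 * 1200          # 6,912,000 pixels
print(triple / single)        # 3.0: triple the shading work per frame
```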
I wonder why they would need to do an A3 "to kill time". If A2 had been good and only the process at TSMC wasn't ready, they could have waited on mass production and gone with A2. NV must have gotten something from A3, be it higher clocks, better yields, or both.
Time is actually working against them.
The best you can do is speculate about the likely nature of a metal spin. My personal take on this is clear: at least 95% of all fixes are pure functional bugs. A very generous 4% are timing related. 1% are yield. (Of the thousands of metal ECOs on chips I've worked on, I know of only one that was yield related.) And that's really the best you can do. Yet reading the comments here, it's all about getting clocks up by some fantastic amount or doing magic tricks with yield. Some even persist in claiming that you can reduce leakage this way. All with a metal spin... Go figure.
but objectively I don't think you can disagree that seeing three times more game content is a major change to the gameplay.
Originally Posted by Rahja the Thief
At midnight (EST, -5 GMT), I will fill in the blanks.... MUHAHAHAA.
GF100 outperforms ATi's 5870 by 46% on average
GF100 outperforms ATi's 5970 by 8% on average
The GF100 gets 148 fps in DiRT2
The GF100 gets 73 fps in Crysis2
The GF100 gets 82 fps in AvP3
*GTX misnomers removed due to Business NDA*
GF100's maximum load temperature is 55 C.
The release week of GF100 is Mar 02nd
Blackberry ordered about a million Tegra2 units for their 2011 smartphone.
Apple ordered a few million Tegra2 units for the 2011 iPhone.
Nintendo ordered several million Tegra2 units for their next gen handheld (the DS2!)
*Removed: Under business NDA*
That's all for now, kiddies! See if you can guess in the meantime; each - represents a letter or number
Extra spoilers!
- GF100 and GF104 will feature a new 32x Anti Aliasing mode for enthusiast setups.
- GF100 and GF104 can do 100% hardware based decoding for 1080p BluRay and H264 playback.
- GF100 and GF104 feature full SLi capability and a new scaling method for rendering!
- GF100 and GF104 will provide full on chip native C++ operation for Windows and Linux environments. This will be further augmented with CUDA and OpenCL.
- GF104 will feature new technology designed for UHD OLED monitors!
- GF100 promises to deliver at least 40% more performance than the GTX295 for less money. GF104 promises double that.
Has no one heard about this post yet?
http://www.guildwarsguru.com/forum/rahjas-freeeeeeee-t10420384.html
Any comments? Fake or not in your opinion?
I'm more for the "fake" option...
Those tests were run using a Core i7 920, 6 GB of DDR3-1333, and a 64 GB Intel SSD paired with a single GF100 card. The tests were run at 1920x1200 with 4x SSAA and 16xAF.
It's such a major change to gameplay that it's banned in many online multiplayer games (Valve games, for instance); FOV is locked at 90 degrees max. It may be great for sims and racing games, but I find it annoying otherwise.
Seems like a tradeoff Nvidia wasn't willing to make; I obviously don't know the reasons, though. Whatever the die-space cost associated with this, keep in mind that it's x16 for the whole chip.

I doubt that a bit, as a beefed-up operand collector could feed the SP and DP subblocks simultaneously, even if the needed register bandwidth cannot be sustained in all cases. Real code doesn't consist exclusively of FMAs with three source operands.
From an implementation-efficiency standpoint I would favor two 16-ALU subblocks that can be chained together with some additional circuitry to produce 16 DP results. Basically similar to what ATI does with the 4 ALUs in a VLIW, except that the NV units start with beefier ALUs (most importantly the multiplier), so one can get away with coupling only two of them.
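As a rough software analogue of that chaining idea (clearly not NVIDIA's actual hardware mechanism, just an illustration of getting one wider result out of two narrow ones), a pair of single-precision values can carry a sum exactly. A minimal sketch using Knuth's error-free TwoSum on NumPy float32; the function name is mine:

```python
import numpy as np

def two_sum(a, b):
    """Knuth's error-free transformation: returns a (hi, lo) float32 pair
    such that hi + lo exactly equals a + b (hi holds the rounded sum,
    lo the rounding error). Two narrow values stand in for one wider one."""
    a, b = np.float32(a), np.float32(b)
    hi = np.float32(a + b)
    bb = np.float32(hi - a)
    lo = np.float32((a - np.float32(hi - bb)) + np.float32(b - bb))
    return hi, lo

# 1.0 + 2**-30 is not representable in a single float32, but the pair is exact:
hi, lo = two_sum(1.0, 2.0**-30)
print(hi, lo)  # hi == 1.0, lo == 2**-30: the lost bits survive in lo
```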
I've been following this thread from the beginning and am content to just be a reader. However, I wish to clarify something here.
Eyefinity (and TH2G) is not about upping the resolution. That's not what attracts the majority of users. It's the change of aspect ratio, which gives more game content, that is important. This is what we want to see when using Eyefinity or TH2G:
http://www.widescreengamingforum.com/screenshots/hl2-lc-th2go.php
Going from 1920x1200 in 16:10 to 2560x1600 in 16:10 is "just upping the resolution". It doesn't add anything to the game other than a higher-resolution image. Going to 48:10 in Eyefinity or TH2G gives you three times more game content. You can like it or not, that's subjective, but objectively I don't think you can disagree that seeing three times more game content is a major change to the gameplay.
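To put a number on the extra content: under the common Hor+ convention, the horizontal FOV is derived from a fixed vertical FOV and the aspect ratio, so going from 16:10 to 48:10 widens what you can see to the sides. A minimal sketch (my own helper, assuming Hor+ behaviour rather than any specific engine):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect_w, aspect_h):
    """Hor+ scaling: derive horizontal FOV from a fixed vertical FOV
    and the display aspect ratio."""
    v = math.radians(vertical_fov_deg)
    h = 2.0 * math.atan(math.tan(v / 2.0) * (aspect_w / aspect_h))
    return math.degrees(h)

# Same ~60 degree vertical FOV, single 16:10 screen vs. 48:10 triple-wide:
print(horizontal_fov(60, 16, 10))  # ~85.5 degrees
print(horizontal_fov(60, 48, 10))  # ~140.3 degrees
```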
I wrote that based on a 10 sq. mm estimate and ninja-edited after a check, which gave about 15 sq. mm.

Why do you think a 15 sq. mm cluster is "too low" for a 60 sq. mm part? GT218 is below 60 sq. mm (also on 40nm), and I very much doubt its single cluster (apparently only 16-way instead of 24-way like the other GT2xx parts) approaches that size...
So... the 5970 is only 38% faster than the 5870?
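Taking the leaked figures at face value, the implied gap is actually a bit smaller: if GF100 is 1.46x a 5870 and 1.08x a 5970, then the 5970 is 1.46 / 1.08 ≈ 1.35x the 5870, i.e. about 35%; the 38% presumably comes from just subtracting 8 from 46. A quick check:

```python
# Implied HD 5970 vs HD 5870 gap from the leaked GF100 numbers:
# GF100 = 1.46 * HD5870 and GF100 = 1.08 * HD5970 (the leak's claims).
gf100_vs_5870 = 1.46
gf100_vs_5970 = 1.08
ratio_5970_vs_5870 = gf100_vs_5870 / gf100_vs_5970
print(f"{(ratio_5970_vs_5870 - 1) * 100:.1f}%")  # ~35.2%, not 38%
```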
Ninjaprime said:
Edit: Honestly though, with the way NV PR has been acting, I wouldn't be surprised if they came out with some random situation where CrossFire scaling didn't work, on some old driver revision, and claimed they "think" or "expect" it was that way, even when they really know it's not true.
"No, of course I didn't witness them firsthand. That is why I said, this is in house testing, and it should be treated as if tainted with bias.
However, the person who emailed them to me (and only a small section was emailed) is a Sr. Hardware Engineer at the Santa Clara facility. He shall remain unnamed."
I know that GF104 outperforms the HD5970 hands down, but as for GF100... I just don't know tbh.