Please, try to be more discreet about your findings. You might embarrass someone over there.
Heh, well done Pete
All figures are FPS; the HD 4870 column is estimated as 1.25 × the 9800 GTX result, and the last column is that estimate's gain over the HD 3870.

Game                  | HD 3870 | 9800 GTX | HD 4870 (= 1.25 × 9800 GTX) | 4870 vs. 3870
Flight Simulator X    | 21.5    | 31.1     | 38.9                        | +80.8%
Call of Duty 4        | 32.5    | 43.7     | 54.6                        | +68.0%
Test Drive Unlimited  | 48.4    | 65.0     | 81.2                        | +67.7%
Crysis                | 24.0    | 31.9     | 39.9                        | +66.1%
World In Conflict     | 14.0    | 22.4     | 28.0                        | +100.0%
Supreme Commander     | 38.4    | 53.7     | 67.1                        | +74.8%
Quake Wars            | 53.0    | 75.1     | 93.9                        | +77.1%
UT3                   | 33.6    | 60.2     | 75.2                        | +123.9%
Avg. across all games |         |          |                             | +82.3%
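For anyone who wants to sanity-check the table, here is a minimal Python sketch of the arithmetic behind it: the HD 4870 figure is just the 9800 GTX figure scaled by the post's assumed 1.25× factor, and the percentage is that estimate over the measured HD 3870 FPS. (Tiny rounding differences against the table are possible, since the table rounds the 4870 FPS to one decimal before computing the percentage.)

```python
# Benchmark figures copied from the post above: (HD 3870 FPS, 9800 GTX FPS).
games = {
    "Flight Simulator X": (21.5, 31.1),
    "Call of Duty 4": (32.5, 43.7),
    "Test Drive Unlimited": (48.4, 65.0),
    "Crysis": (24.0, 31.9),
    "World In Conflict": (14.0, 22.4),
    "Supreme Commander": (38.4, 53.7),
    "Quake Wars": (53.0, 75.1),
    "UT3": (33.6, 60.2),
}

def gain_over_3870(fps_3870, fps_9800gtx, factor=1.25):
    """Estimated HD 4870 gain over HD 3870, assuming 4870 = factor * 9800 GTX."""
    est_4870 = fps_9800gtx * factor
    return 100.0 * (est_4870 / fps_3870 - 1.0)

gains = {game: gain_over_3870(*fps) for game, fps in games.items()}
average = sum(gains.values()) / len(gains)

for game, pct in gains.items():
    print(f"{game}: +{pct:.1f}%")
print(f"Avg. across all games: +{average:.1f}%")  # ~ +82.3%
```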
Here is another post I found that claims the RV770 XT is roughly 80% faster than the RV670 XT:
http://forum.beyond3d.com/showpost.php?p=1164052&postcount=1846
Well, revisiting some old HD 2900 XT launch reviews, its performance was all over the map. In a few cases it did seem to provide that 66%-100% jump over the X1950 XTX, but when it offered a consistent improvement it was usually nearer to 50%. More befuddling, in many cases, depending on AA/AF settings, resolution, and the individual review, it offered almost no improvement at all over its predecessor. I'd wildly guesstimate that if you averaged it out (and again, the results seem all over the map depending on review and settings) it would come out a lot closer to 40% improvement, if that.
The HD 2900 XT at launch often suffered a MASSIVE hit with AA/AF enabled (being, say, 50% faster than the X1950 XTX with no AA/AF, and only 5% faster with AA/AF). They must have straightened that out a bit over time, because I don't see that massive disparity between the HD 3870's AA/AF and no-AA/AF scores.
Fuad admits he was mistaken about the 4870's bus width. He still doesn't have much else to say, either; he hasn't provided much of interest lately (except maybe the fan sizes for the 4870 and 4850, if that can be considered interesting). Maybe some of his sources were let go.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=7455&Itemid=34
But honestly, it was always very weird to have the Pro at 256-bit and the XT at 512-bit. Some news sites add caveats when passing along less plausible information, like "our sources told us, but take it with a grain of salt". I wish he were a little more circumspect instead of boasting.
Yet another reason not to believe any technical rumor Fuad posts unless you've seen it elsewhere first, or it's just plain common sense. He may have a source or two feeding him legitimate AMD business info, but his tech info is b.s. more often than not.
I mean, I knew HD 4870 wasn't going to feature an external 512-bit memory interface.
1) HD 4850 and 4870 use the same GPU - why would any GPU manufacturer disable half of the memory controller/channels on a native 512-bit part?
2) How the #^% did ATi manage to fit a 512-bit MC in a 256mm^2 part on 55nm?
Common. $%&*ing. Sense.
Forgot #3 - 512-bit combined with GDDR5 is overkill
ATI may have only a small number of RV770 graphics cards for the announcement next month
http://www.pczilla.net/en/post/18.html
Thanks, I was getting afraid I was missing some fundamental part of the R600 design. What I said wasn't correct.
Much better:
"2) How the #^% did ATi manage to fit a 512-bit MC in a 256mm^2 part on 55nm?"

Just out of curiosity, how big would a die need to be for 512-bit? Does that depend on the process used, or is it always the same? And is there some rough (linear) equation to check how big the bus can be for a given die size? Or am I asking too many questions? :smile:
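Not an authoritative answer, but the usual hand-wave is that a wide external bus is pad-limited: the I/O pads have to fit along the die's perimeter, and perimeter only grows with the square root of area, so the relation isn't linear. A toy model of that scaling (every number here is made up purely for illustration, not real RV770 or process data):

```python
def min_die_area_mm2(bus_bits, pads_per_bit=2.0, pad_pitch_um=50.0):
    """Toy pad-limited model; hypothetical numbers, illustration only.

    Assumes each data bit drags along ~pads_per_bit I/O pads (strobes,
    clocks, power/ground) in a single ring spread over all four edges
    of a square die.
    """
    total_pads = bus_bits * pads_per_bit
    edge_mm = (total_pads / 4.0) * pad_pitch_um / 1000.0  # pads along one edge
    return edge_mm ** 2  # square die

# Perimeter ~ 4 * sqrt(area), so doubling the bus width quadruples the
# minimum die area in this model: bus width scales with sqrt(die area).
print(min_die_area_mm2(256), min_die_area_mm2(512))
```

The takeaway is just the scaling: for a pad-limited design, supportable bus width goes roughly as the square root of die area, and the constant depends heavily on the process's pad pitch and how many pad rings the packaging allows, which is why there's no single answer independent of process.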
"The days of monolithic mega-chips are gone."

This came from AMD's Rick Bergman. It really got me wondering whether AMD is just spreading FUD because they can't get at NVIDIA (could this be similar to Intel's ray-tracing hype?), or whether they really see the future that differently from NVIDIA. My own thinking got me this far:
Multi-die doesn't need to give better performance than a single die; it just needs to offer competitive performance at a competitive price.

IMHO, it is very risky and leaves little margin for error to tackle the high end with only multi-die approaches. If the multi-die approach doesn't give better performance than a larger single die, it won't gain any traction in the market.
The biggest problem for AMD is the ability (or lack thereof) to fund the R&D required for big "monolithic" GPUs. There's no way they can afford that, so they have to tackle the high end with multi-die solutions. NVIDIA has a big advantage in that they have two options, a single larger die or a smaller multi-die approach, and can choose whichever gives them the best performance at any given point in time.

Really? So NVIDIA can choose to throw away all the work they did on a large die in favor of two smaller dies? All the R&D work for the large die is irrelevant? No.