Just because G80 is faster than R600 doesn't mean ATI won't solve that soon.
The real question is how? A 55nm process and sky-high clock speeds won't be enough in Q1 2008.
"Just because G80 is faster than R600 doesn't mean ATI won't solve that soon."

Why wouldn't Nvidia take the opportunity to further harm the competition?
"The real question is how? A 55nm process and sky-high clock speeds won't be enough in Q1 2008."

Extra TMUs and ROPs. What else could be wrong with R600 hardware-wise apart from that? It seems like a monster otherwise!
Just think... a refresh that doubled the TMUs and ROPs and increased shaders by 50%. Maybe a slight core and memory clock increase to boot. With good drivers I think that could slaughter G80.
Custom design? Is this confirmed? For which part of the chip? Shaders?

Personally, I think the performance disparity between the 8600 and 8800 GTS is more of an anomaly due to process, architecture, and economics. Clearly DX10 required a big step up in transistors, and obviously NVIDIA didn't want to hold back on performance at the high end or risk a new process, so they created a monster of a chip. The custom design of course helped with power consumption and thermals. The 8600 sells in a much more price-sensitive market, so die size is not really flexible there.
The 80 nm and 90 nm nodes do not seem like the sweet spot for DX10. At 65 nm NVIDIA can shrink the die size of a high-end chip and increase the performance of the mainstream parts. Whether the delta will be quite the same as in the NV40 era remains to be seen.
"Extra TMUs and ROPs. What else could be wrong with R600 hardware-wise apart from that? It seems like a monster otherwise! Just think... a refresh that doubled the TMUs and ROPs and increased shaders by 50%. Maybe a slight core and memory clock increase to boot. With good drivers I think that could slaughter G80."

What would stop NV from doing the same thing with their refresh? Sure, it could slaughter the G80, but the G80 is a year old.
Would a multi-GPU single part be capable of running MultiMonitor?
"Why design yet another monolithic enthusiast-class X if it has no competition but its predecessors?"
By that same token, we'd still be running Intel 8086 CPUs and 3dfx Voodoo 1s. Now do you see why that question seems so silly to me? It's called technology advancement.
So if Nvidia goes with something like the GX2 path, how do they top that? Add three or four chips together? They will need a new single chip. It's inevitable. So why not do that now?
I still feel that even if it saves them money, it makes them vulnerable and puts them at greater risk from competition. It's in their best interest to put as much distance as possible between their single-chip offerings and their competitors'.
Two responses to that: re-examine the situations of 3dfx and of NV30.
Nvidia is not concerned about R600, and should be looking at any possible R700 designs being pulled forward.
I guess it's a preference as to how you fight the war. Do you keep your enemies at bay or do you go for the total kill? I'd go for the kill so as to not be bothered by them in the future.
How is NV vulnerable to competition in the enthusiast segment within any reasonable scenario for a possible R6xx GPU?
"After all, the G80s are selling well and, given the time, they are probably quite a bit cheaper to produce. Why should they pull out a G90 now, while the G80 is still king of the hill?"

You can only sell a G80 card to someone who hasn't already bought one, and the number of people who buy high-end cards is limited. But a G90 card will be bought not only by high-end customers who haven't bought a G80, but also by some of those who have and want to upgrade. Hence, you can sell more G90s than you can sell G80s.
The argument for why Nvidia doesn't need something better/faster than G80 this year seems to be that R6x0 simply can't compete and that AMD won't have a competitive product until R700. But is the R600 architecture really that bad, considering we will soon start to see DX10-optimized titles?
Wouldn't an R680 update be able to compete with (and even beat) G80 in titles that use a lot of shader power if it just added more TMU and ROP units, especially if process improvements allowed them to increase the clock speed as well?
My thought was that R600 is really strong in shader performance, but that they underestimated the need for TMU and ROP units?!?
"Wouldn't an R680 update be able to compete with (and even beat) G80 in titles that use a lot of shader power if it just added more TMU and ROP units, especially if process improvements allowed them to increase the clock speed as well?"

Sadly, I think R600's "refresh replacement", if there is such a thing, would have the same TMU and ROP count. It'll be a D3D10.1-specific redesign (which affects TMU and ROP functionality: the format-versus-function orthogonality that's missing from D3D10), plus a process migration and a clock bump, aimed at improving margins.
Are you really suggesting that a company should only improve its product line when it thinks the competition has a chance of producing a competitive product? Silly.
There is no R&D savings to be had by not releasing a new high-end part this fall. The R&D should have been completed a long time ago.
By refreshing their lineup, all they would accomplish is higher sales, lower costs, and a more solid market share and reputation for leadership. Yeah, I can see why they would intentionally give up all that in order to give AMD a chance to catch up...
ShaidarHaran said: Uh, yes there is. If they canned the project many moons ago, when it became apparent AMD couldn't compete at the high end, and shifted towards a margin-maximization strategy.