The G92 Architecture Rumours & Speculation Thread

Why wouldn't Nvidia take the opportunity to further harm the competition?

I think if NV releases a single-core 1-teraflop GPU this year, it could K.O. ATi in the enthusiast segment until R700. That wouldn't be good from the user's point of view, but of course this is business and not a charity event.
 
The real question is how? A 55nm process and sky-high clock speeds won't be enough in 2008/Q1.
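For a rough sense of what a "single-core 1 teraflop" part implies, here is a back-of-envelope sketch; the G80 figures are the well-known 8800 GTX numbers, while the hypothetical unit count and clock are purely illustrative assumptions, not leaked specs:

```python
# Back-of-envelope peak shader throughput: ALUs x shader clock x flops per ALU per clock.
def peak_gflops(alus, shader_clock_ghz, flops_per_alu_per_clock):
    return alus * shader_clock_ghz * flops_per_alu_per_clock

# Known 8800 GTX figures: 128 scalar ALUs at 1.35 GHz, MADD+MUL counted as 3 flops/clock.
g80 = peak_gflops(128, 1.35, 3)            # ~518 GFLOPS

# Purely hypothetical 1-teraflop successor: double the ALUs at a modestly higher clock.
hypothetical = peak_gflops(256, 1.5, 3)    # ~1152 GFLOPS

print(f"8800 GTX peak:        {g80:.0f} GFLOPS")
print(f"Hypothetical 2x part: {hypothetical:.0f} GFLOPS")
```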

Extra TMUs and ROPs. What else could be wrong with R600 hardware-wise apart from that? It seems like a monster otherwise!

Just think... a refresh that doubled TMUs and ROPs and increased shaders by 50%. Maybe a slight core and memory clock increase to boot. With good drivers, I think that could slaughter G80.
 

How would this help AA performance?

You mean 480 SPs, 32 TMUs, 32 ROPs?
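For reference, a quick (and purely theoretical) fill-rate comparison of R600 as shipped on the HD 2900 XT (16 TMUs, 16 ROPs at 742 MHz) against such a doubled-up refresh; the refresh clock below is just a placeholder assumption:

```python
# Theoretical fill rates: units x core clock. Real-world throughput depends on far more than this.
def fill_rates(tmus, rops, core_mhz):
    gtexels = tmus * core_mhz / 1000.0   # Gtexels/s (bilinear)
    gpixels = rops * core_mhz / 1000.0   # Gpixels/s
    return gtexels, gpixels

r600    = fill_rates(16, 16, 742)  # HD 2900 XT as shipped
refresh = fill_rates(32, 32, 850)  # rumoured doubled TMUs/ROPs at an assumed ~850 MHz

print("R600:    %.1f GT/s, %.1f GP/s" % r600)
print("Refresh: %.1f GT/s, %.1f GP/s" % refresh)
```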

Don't forget that if NV releases a high-end GPU this year, R680 will have to compete with that, so no one will care anymore whether R680 can K.O. G80.
~8 weeks left, so if NV has a high-end GPU in the sack, it will have to come out of the sack in the next 2-3 weeks; they can't hide it anymore ;)
 
Personally, I think the performance disparity between the 8600 and 8800 GTS is more of an anomaly due to process, architecture, and economics. Clearly DX10 required a big step up in transistors, and obviously NVIDIA didn't want to hold back on high-end performance or risk a new process, so they created a monster of a chip. The custom design of course helped with power consumption and thermals. The 8600 sells in a much more price-sensitive market, so its die size is not really flexible.

The 80 and 90 nm nodes do not seem like the sweet spot for DX10. At 65 nm NVIDIA can shrink the die size of a high-end chip and increase the performance of the mainstream parts. Whether the delta will be quite the same as in the NV40 era remains to be seen.
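To put the node talk in perspective, here is an idealized optical-shrink estimate; real shrinks never scale this cleanly, and the die sizes below are the commonly quoted approximate figures rather than official numbers:

```python
# Idealized die-shrink scaling: area scales with the square of the node ratio.
def shrink_area(area_mm2, old_node_nm, new_node_nm):
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

print(f"G80 (~484 mm^2, 90nm) at 65nm:  ~{shrink_area(484, 90, 65):.0f} mm^2")
print(f"R600 (~420 mm^2, 80nm) at 55nm: ~{shrink_area(420, 80, 55):.0f} mm^2")
```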
Custom design? Is this confirmed? For which part of the chip? Shaders?


Extra TMUs and ROPs. What else could be wrong with R600 hardware-wise apart from that? It seems like a monster otherwise!

Just think... a refresh that doubled TMUs and ROPs and increased shaders by 50%. Maybe a slight core and memory clock increase to boot. With good drivers, I think that could slaughter G80.
What would stop NV from doing the same thing with their refresh? Sure it could slaughter the G80, but G80 is a year old.
 
Would a multi-GPU single part be capable of running multi-monitor?

"Why design yet another monolithic enthusiast-class X if it has no competition but its predecessors?"

By that same token, we'd still be running Intel 8086 CPUs and 3dfx Voodoo 1s. Now do you see why that question seems so silly to me? It's called technology advancement.

Considering just how long it's taken Intel to get to where they are now from the original 8086 it's not silly at all. Remember the days of the Pentium II when 33-50MHz speed bumps twice a year were like manna from heaven? I sure don't want to go back to those pathetic days. My point is simply this: these companies on their own have little reason to advance technology quickly. It is only pressure from intense competition that causes the rapid technological advancements we've become accustomed to in recent years.

I'm not saying it would be a bad thing for the consumer if Nvidia released a new enthusiast monolithic GPU this fall, I'm saying it would be rather a waste of engineering and capital resources to do so. A throwback to the GX2 (albeit G8x architecture on 65nm) is what's in their best interest.
 
So if Nvidia goes with something like the GX2 path, how do they top that? Add three or four chips together? They will need a new single chip. It's inevitable. So why not do that now?

I still feel that even if it saves them money, it makes them vulnerable and puts them at greater risk of competition. It's in their best interest to put as much distance between their single chip offerings and their competitors.
 

How is NV vulnerable to competition in the enthusiast segment within any reasonable scenario for a possible R6xx GPU? It doesn't make sense for an R6xx generation GPU to be capable of the huge leap in performance necessary over R600, unless it was designed that way, and no way was any R600 derivative designed to be vastly superior to R600.
 
Two responses to that: re-examine the situations of 3dfx and NV30.

Nvidia is not concerned about R600, and should be looking at any possible R700 designs being pulled forward.

I guess it's a preference as to how you fight the war. Do you keep your enemies at bay or do you go for the total kill? I'd go for the kill so as not to be bothered by them in the future.
 
Two responses to that: re-examine the situations of 3dfx and NV30.

3dfx's demise was certainly due to overconfidence and lack of execution, but NV30 did not lead to the same fate for NV, so unless you're trying to point out the different approaches of different companies I'm not so sure I follow you...

Nvidia is not concerned about R600, and should be looking at any possible R700 designs being pulled forward.

As much as I'd love to see a complete and competitive R700 sooner rather than later, I don't think there's any chance AMD will just be able to "pull it forward" by any significant amount of time, certainly not given their track record over the past few years...

I guess it's a preference as to how you fight the war. Do you keep your enemies at bay or do you go for the total kill? I'd go for the kill so as not to be bothered by them in the future.

Intel's doing a far better job of killing off AMD than Nvidia ever could, and some would even argue AMD is doing a far better job of killing off AMD than even Intel.
 
How is NV vulnerable to competition in the enthusiast segment within any reasonable scenario for a possible R6xx GPU?

Are you really suggesting that a company should only improve its product line when it thinks the competition has a chance of producing a competitive product? Silly.

There are no R&D savings to be had by not releasing a new high-end part this fall; the R&D should have been completed a long time ago. By refreshing their lineup, all they would accomplish is higher sales, lower costs, and a solidified market share and reputation for leadership. Yeah, I can see why they would intentionally give up all that in order to give AMD a chance to catch up.....
 
The argument for why Nvidia doesn't need something better/faster than G80 this year seems to be that R6x0 simply can't compete, and AMD won't have a competitive product until R700. But is the R600 architecture really that bad, considering we will soon start to see DX10-optimized titles coming?

Wouldn't an R680 update be able to compete with (and even beat) G80 in titles using a lot of shader power if it only added more TMU and ROP units, especially if, thanks to process improvements, they could increase the clock speed as well?

My thought was that R600 is really strong in shader performance, but that they underestimated the need for TMU and ROP units?!?

Per
 
After all, the G80s are selling well and given the time, they are probably quite a bit cheaper to produce. Why should they pull out a G90 now, while the G80 is still king-of-the-hill?
You can only sell a G80 card to someone who hasn't already bought one, and the number of people who buy high-end cards is limited. But a G90 card will be bought not only by high-end customers who haven't bought G80, but also by some of those who have and want to upgrade. Hence, you can sell more G90s than you can sell G80s.
 
The argument for why Nvidia doesn't need something better/faster than G80 this year seems to be that R6x0 simply can't compete, and AMD won't have a competitive product until R700. But is the R600 architecture really that bad, considering we will soon start to see DX10-optimized titles coming?

Wouldn't an R680 update be able to compete with (and even beat) G80 in titles using a lot of shader power if it only added more TMU and ROP units, especially if, thanks to process improvements, they could increase the clock speed as well?

My thought was that R600 is really strong in shader performance, but that they underestimated the need for TMU and ROP units?!?

Per

R600's greatest problem is a bad manufacturing process, not its architecture. HD2400 and HD2600 are very competitive products. R600 at 850MHz (the initially mentioned core clock for the HD2900XTX) would be just slightly slower than a GF8800GTX in today's games.

R680 could be clocked at ~850MHz, and I hope this is Orton's 96-ALU GPU (+24 TMUs and 24 ROPs, or 16 uber-ROPs). In that case, I wouldn't be surprised if R680 were 25-35% faster than the Ultra.
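Reading "96-ALU" as 96 VLIW5 units (480 SPs), a rough theoretical-throughput comparison against R600 and the 8800 Ultra looks like the sketch below; the 850MHz clock and the unit count are the speculation above, not confirmed specs, and peak GFLOPS is of course not the same thing as game performance:

```python
# Peak programmable-shader throughput: stream processors x clock (GHz) x flops per SP per clock.
def gflops(sps, clock_ghz, flops_per_clock):
    return sps * clock_ghz * flops_per_clock

r600     = gflops(320, 0.742, 2)  # HD 2900 XT: 64 VLIW5 units, MADD = 2 flops/clock
r680_est = gflops(480, 0.850, 2)  # speculative: 96 VLIW5 units at an assumed 850 MHz
ultra    = gflops(128, 1.512, 3)  # 8800 Ultra: 128 scalar ALUs, MADD+MUL

print(f"R600 ~{r600:.0f} GFLOPS, speculative R680 ~{r680_est:.0f} GFLOPS, 8800 Ultra ~{ultra:.0f} GFLOPS")
print(f"Speculative R680 / R600 ratio: {r680_est / r600:.2f}x")
```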
 
Wouldn't an R680 update be able to compete with (and even beat) G80 in titles using a lot of shader power if it only added more TMU and ROP units, especially if, thanks to process improvements, they could increase the clock speed as well?
Sadly I think R600's "refresh replacement", if there is such a thing, would have the same TMU and ROP count. It'll be a D3D10.1-specific redesign (which affects TMU and ROP functionality: format versus function orthogonality that's missing from D3D10) plus process migration plus clock bump, aimed at improving margins.

Whatever this chip is going to be, the plan for what it's going to be was set a long time ago. Much like R580, which appeared pretty much on-time with a spec that was refined from R520 but otherwise not "re-targetted due to competitive environment".

Similarly, it seems to me that NVidia's refresh plans for the G80-based architecture were set a long time ago. All we see when the launch of these GPUs comes is some tweaking of clocks versus power. The GX2 nonsense based upon G7x is about as close as it gets to competitive re-targetting these days (as have been ATI's, ahem, misfires on CrossFire).

Some of these newer games (e.g. Bioshock) are indicating that R600 isn't disadvantaged - discounting AA, which I don't like doing. AA performance is the easiest thing to fix with alternate frame rendering, provided ATI can get to grips with R6xx CF (see the sketch after this post). So it's arguable that R6xx's performance (including refreshes) becomes more competitive as the game landscape evolves, not less. That was Eric Demers's point all along, but the drivers have sent everyone running away. And we ain't coming back till R6xx's shit is sorted.

Anyway, it seems to me that NVidia won't really have been able to "slow down" in response to R600, since the lead time clashed with R600's mucho-delayed release. NVidia would have decided to carry on with its 2007Q4 plans by the start of 2007Q2, I imagine - the lead time is too long to keep hanging around, particularly if you're also targetting 65nm.

Jawed
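As a footnote on the alternate frame rendering point above: the idea is simply that whole frames are dealt out to the GPUs round-robin, so each chip renders (and AA-resolves) its frame independently. A minimal illustrative sketch of the scheduling idea, not a description of how CrossFire's driver actually implements it:

```python
# Alternate frame rendering (AFR): frame N goes to GPU N mod gpu_count, so per-frame
# work (including the MSAA resolve) never has to be split across chips.
def afr_schedule(frame_count, gpu_count=2):
    return {frame: frame % gpu_count for frame in range(frame_count)}

if __name__ == "__main__":
    for frame, gpu in afr_schedule(6).items():
        print(f"frame {frame} -> GPU {gpu}")
```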
 
G90 might have existed, but perhaps they decided not to go to the tape-out stage and might even have decided to put the saved resources into the next generation. If they can pull a G80-like stunt on R700, it will hurt ATi much more than if NV releases a G90 that dominates R600/R680 for a few months. It could also be that they have G90 ready, but the press/rumour mill has just missed it so far.
 
Are you really suggesting that a company should only improve its product line when it thinks the competition has a chance of producing a competitive product? Silly.

What are you talking about? Have you missed just how many times I've mentioned a die-shrunken G80 derivative in a GX2-style configuration as a viable high-end SKU?

There are no R&D savings to be had by not releasing a new high-end part this fall; the R&D should have been completed a long time ago.

Uh, yes there is, if they canned the project many moons ago when it became apparent AMD couldn't compete at the high end and shifted towards a margin-maximization strategy.

By refreshing their lineup, all they would accomplish is higher sales, lower costs, and a solidified market share and reputation for leadership. Yeah, I can see why they would intentionally give up all that in order to give AMD a chance to catch up.....

See first response.
 
ShaidarHaran, they would have had to have canceled the next high-end project (G90+) before the ATI R600 was even unveiled.

Also, do you think Nvidia is doing too badly or hurting that much with their 44%+ margins? I don't think so. If it's not broken, don't fix it. What got them to that point is having high-end parts.
 
ShaidarHaran said:
Uh, yes there is, if they canned the project many moons ago when it became apparent AMD couldn't compete at the high end and shifted towards a margin-maximization strategy.

You can always sit on a completed project... you cannot, however, just pull one out of thin air.

/step 1: Can R&D on G90.
/step 2: Rehash G80.
/step 3: Profit...
/step 4: Release competitor to R700...
/step 5: Oh #@%& !!!
/step 6: Fire everyone who thought step 1 was a good idea.
 