What do you expect for R650

What do you expect for HD Radeon X2950XTX

  • Faster than G80 Ultra by about 25-35% overall

    Votes: 23 16.4%
  • Faster than G80 Ultra by about 15-20% overall

    Votes: 18 12.9%
  • Faster than G80 Ultra by about 5-10% overall

    Votes: 18 12.9%
  • About the same as G80 Ultra

    Votes: 16 11.4%
  • Slower than G80 Ultra by about 5-10% overall

    Votes: 10 7.1%
  • Slower than G80 Ultra by about 15-25% overall

    Votes: 9 6.4%
  • I cannot guess right now

    Votes: 46 32.9%

  • Total voters
    140
What would the odds be that AMD's leakage problem comes from all that cache they integrated into the die? As tightly packed as those transistors are likely to be, it seems like a really good spot for leakage. The 80HS process probably wasn't designed with memory in mind.
 
It is my opinion that R600 was never meant to be running at these high clock speeds so soon.
However, they had no choice but to release it as fast as they could, and to cut back on the (very rare) HD2900 XTX's core clocks (due to thermal issues), leaving the higher GDDR4 frequency as the primary differentiating factor against the regular HD2900 XT.
That is contrary to the build-up of R600, though. The noise about devastatingly high power consumption, the rush to produce power supplies with 4x2 power connectors and to supply graphics cards with 300W, yet the final requirements falling in line with two 2x3 connectors (the same as G80), suggests that higher clocks were being looked at, not that these were already too high.
 
No need to go into deep detail to see the big picture. If R650 is only a 65nm speed-bumped version of R600, even if it also includes some minor tweaks (and please don't argue for big changes, time is simply too short), I cannot see how it can offer 2x more real gaming performance than R600 to compete with G92... Afterwards we can always argue about details and specific benchmarks where the R600/R650 macro-architecture should be faster than G80/G92, but at this point it's pointless.

Why do you think G92 has 2x game performance? And why do you think NV can't have any trouble with G92?
For the last 20 months AMD has almost always had trouble with their GPUs, and NV has had no problems, but this can change anytime.
 
NV seem to have learned the lesson of not shooting themselves in the foot with a high-end part debuting on a new process. ATI has been riskier as of late (well, they were kinda forced to be).

Also, I do still think that nV has had lots of time for playing around with stuff and trying things out, so they do have a certain advantage there, purely because they were under less pressure in comparison.

And again one of my prophetic ass-umptions: I still think that nV will thoroughly kill ATI with the next gen as well because of the reasons mentioned above.
 
Why do you think G92 has 2x game performance? And why do you think NV can't have any trouble with G92?
For the last 20 months AMD has almost always had trouble with their GPUs, and NV has had no problems, but this can change anytime.


I think AMD's problem is the aggressiveness of their process of choice. Not only did they go to 80nm for R600, but their choice of the HS version, which is the highest-performance process TSMC has at 80nm, might have had something to do with increasing the problems they faced.
 
Why do you think G92 has 2x game performance? And why do you think NV can't have any trouble with G92?
For the last 20 months AMD has almost always had trouble with their GPUs, and NV has had no problems, but this can change anytime.

What good timing: http://www.beyond3d.com/content/news/230
Some blurb from this news item:
In recent analyst conferences that were publicly webcast on NVIDIA's website, Michael Hara (VP of Investor Relations) has claimed that their next-generation chip, also known as G92 in the rumour mill, will deliver close to one teraflop of performance. In a separate answer to an analyst's question, he also noted that they have no intention of diverging from the cycle they have adopted with the G80, which is to have the high-end part ready at the end of the year and release the lower-end derivatives in the spring.
I cannot say more publicly, but you can already start to form your own opinion ;)
 
Supposedly all ATI GPUs are a common design, mobile and desktop, so the power saving techniques are common to all.
That's too simplistic.
Even with the same design, you can have differences in how you can clock gate as the clock speed increases. And a lot of power-saving techniques are orthogonal to the front-end design and are mostly a back-end affair. A number of new techniques have only recently been introduced. They are very promising, but they have area overhead and reduce speed. That's not a big issue for a lot of chips, but it's definitely a problem for your flagship GPU.
Also, while they may share the same architectural basis and even some of the same RTL code, it can't be as simple as that: it's more realistic that they start with the code base of an existing desktop design, where power was just one of many priorities, and then add tweaks everywhere to improve it, because now it's the undisputed number one concern.
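For what it's worth, here is a very rough first-order sketch of the trade-off being described, using the standard dynamic-power relation P ≈ activity · C · V² · f (every number below is purely illustrative, not an actual R6xx or G8x figure):

Code:
# Rough first-order model of why clock gating helps, and what it costs.
# Dynamic power scales as activity * C * V^2 * f; gating an idle block
# drives its effective switching activity toward zero, at the price of
# extra gating logic (area) and a small timing penalty on the clock path.
# All numbers here are hypothetical and only for illustration.

def dynamic_power_w(activity, cap_farads, volts, freq_hz):
    """First-order dynamic power estimate in watts."""
    return activity * cap_farads * volts ** 2 * freq_hz

block = dict(cap_farads=2e-9, volts=1.2, freq_hz=750e6)

busy  = dynamic_power_w(0.25, **block)   # block actively switching
gated = dynamic_power_w(0.01, **block)   # clock gated while idle (residual clock-tree activity)

print(f"busy: {busy:.2f} W  gated: {gated:.2f} W")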
 
NV seem to have learned the lesson of not shooting themselves in the foot with a high-end part debuting on a new process. ATI has been riskier as of late (well, they were kinda forced to be).

Also, I do still think that nV has had lots of time for playing around with stuff and trying things out, so they do have a certain advantage there, purely because they were under less pressure in comparison.

And again one of my prophetic ass-umptions: I still think that nV will thoroughly kill ATI with the next gen as well because of the reasons mentioned above.
I think the same about the high end. AMD has no chance this year if G92 destroys poor R650. Then they could only recover if R700 is incredible, because G100 will be!
However, in the mid-range market, the fight will be more difficult for NVIDIA.
 
This means nothing about game performance, and you didn't answer my other question. Oooo, I see, you are so elite you can't say more :p

Sometimes I feel like I'm in the nvnews forums here ;)
No problem if more than twice the FLOP performance means nothing to you. I must be stupid :cry: Of course it's not the only factor, but it's still an important one, as it is where G8x was arguably lacking.
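For a back-of-envelope check of that "close to one teraflop" versus "more than twice" framing, here is a small sketch. The per-chip figures are the commonly quoted specs (128 SPs at 1.35 GHz for G80 GTX, 320 ALU lanes at 742 MHz for R600 XT), assumed here rather than taken from this thread:

Code:
# Back-of-envelope peak programmable-FLOP arithmetic.  Specs below are the
# commonly quoted ones (assumptions for illustration, not from this thread):
# G80 GTX: 128 SPs at 1.35 GHz (MADD = 2 flops, +co-issued MUL = 3 flops)
# R600 XT: 320 ALU lanes at 742 MHz (MADD = 2 flops)
# G92: the "close to one teraflop" claim from the B3D news item.

g80_madd     = 128 * 1.35e9 * 2   # ~346 GFLOPS
g80_madd_mul = 128 * 1.35e9 * 3   # ~518 GFLOPS counting the co-issued MUL
r600_madd    = 320 * 0.742e9 * 2  # ~475 GFLOPS
g92_claim    = 1.0e12             # "close to one teraflop"

for name, flops in [("G80 MADD", g80_madd), ("G80 MADD+MUL", g80_madd_mul),
                    ("R600 MADD", r600_madd)]:
    print(f"G92 claim vs {name}: {g92_claim / flops:.2f}x")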
 
I think the same about the high end. AMD has no chance this year if G92 destroys poor R650. Then they could only recover if R700 is incredible, because G100 will be!
However, in the mid-range market, the fight will be more difficult for NVIDIA.

Geo's Law #5: "GPUs that are more than 6 months out are ALWAYS incredible."

Let's try to keep the PR level in here down a little bit, please. :smile:
 
If not, then you just register under another name and start your job again.
I wouldn't worry too much about that, we know how to detect that kind of bullshit well enough, thank you very much! :) And let's not get to accusing members of doing these kinds of things before there is any evidence they do or will ever do so, shall we? (but yes, aeryon, please tone it down a bit! Check your PM box...)
 
I wouldn't worry too much about that, we know how to detect that kind of bullshit well enough, thank you very much! :) And let's not get to accusing members of doing these kinds of things before there is any evidence they do or will ever do so, shall we? (but yes, aeryon, please tone it down a bit! Check your PM box...)

Sorry if I went too far, but I can't leave comments alone when someone tries to suggest that R650/R700 are already DOA and that G92/G100 is already the president of the universe.
 
If not, then you just register under another name and start your job again.

I wouldn't worry too much about that, we know how to detect that kind of bullshit well enough, thank you very much! :) And let's not get to accusing members of doing these kinds of things before there is any evidence they do or will ever do so, shall we? (but yes, aeryon, please tone it down a bit! Check your PM box...)

OK, got it. Sorry for the trouble. My mistake, but I thought personal attacks were not allowed on this forum. By the way, telling the truth is not PR, and I don't work in PR.
 
The reversal of roles might seem pretty amazing, considering how much credit ATI got for staying conservative during the time of their greatest success, R300. Of course, the other part of the story is that it was their superior architecture that gave them the luxury of going with a tried-and-true process, while Nvidia had to push the manufacturing envelope in order to make NV3x at least somewhat competitive.

Obviously, IHVs don't just select a given process for the heck of it. Following R300 and R420, ATI's design parameters have all but required high frequencies to remain competitive. As such, pushing the process became essential.

As I recall, Nvidia beat ATI to 130nm by about 6 months; the 110nm parts came within months of each other with ATI being ahead by a month or so. ATI was 3 months ahead to 90nm (would have been 6 or more had R520 not been delayed) and almost 9 months to 80nm.

Here are GPU frequencies for high-end Nvidia cards, starting with NV25:
300MHz, 500MHz, 450MHz, 475MHz, 400MHz, 430MHz, 550MHz, 650MHz, 575MHz.
For ATI, starting with R200:
275MHz, 325MHz, 380MHz, 412MHz, 520MHz, 540MHz, 625MHz, 650MHz, 750MHz.

Nvidia has been bouncing around the 500MHz range for 4 years after making that huge 200MHz jump for NV30, while ATI has been on a steady, consistent frequency ramp.

Let's look at the extremely simplistic "execution difficulty index", which we can get simply by multiplying the number of transistors by frequency and normalizing to R300:

Code:
	Freq (MHz)	Trans (M)	ED-Index		Freq (MHz)	Trans (M)	ED-Index
R200	275	60	0.47		NV25	300	63	0.54
R300	325	107	1		[B]NV30[/B]	500	125	[B]1.80[/B]
R350	380	107	1.17		NV35	450	130	1.68
R360	412	107	1.27		NV38	475	130	1.78
R420	520	160	2.39		NV40	400	222	2.55
R480	540	160	2.48		G70	430	300	3.71
[B]R520[/B]	625	321	[B]5.77[/B]		G70	550	300	4.74
R580	650	384	7.18		G71	650	274	5.12
[B]R600[/B]	750	720	[B]15.53[/B]		[B]G80[/B]	575	690	[B]11.41[/B]

I find it interesting that every single part since R300 which had an EDi more than 2x that of the preceding part has suffered delays.
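For reference, a minimal sketch (in Python) that recomputes the index from the numbers in the table above and flags the parts whose EDi more than doubled over their predecessor:

Code:
# Recompute the "execution difficulty index" from the table above:
# EDi = core frequency (MHz) * transistor count (millions), normalized so
# that R300 = 1.0.  Parts whose EDi is more than twice their predecessor's
# (within the same vendor's line) get flagged, matching the observation above.

ati = [("R200", 275, 60), ("R300", 325, 107), ("R350", 380, 107),
       ("R360", 412, 107), ("R420", 520, 160), ("R480", 540, 160),
       ("R520", 625, 321), ("R580", 650, 384), ("R600", 750, 720)]
nv  = [("NV25", 300, 63), ("NV30", 500, 125), ("NV35", 450, 130),
       ("NV38", 475, 130), ("NV40", 400, 222), ("G70", 430, 300),
       ("G70-512", 550, 300), ("G71", 650, 274), ("G80", 575, 690)]

R300_BASE = 325 * 107  # normalization reference

for line in (ati, nv):
    prev = None
    for name, freq_mhz, trans_m in line:
        edi = freq_mhz * trans_m / R300_BASE
        flag = "  <-- more than 2x predecessor" if prev is not None and edi > 2 * prev else ""
        print(f"{name:8s} {edi:6.2f}{flag}")
        prev = edi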
 
I find it interesting that every single part since R300 which had an EDi more than 2x that of the preceding part has suffered delays.


That is interesting. However, note that as NVIDIA has started to expand into multiple clock domains (most prominently with G80), this comparison becomes more complicated since it is based only on core clock frequency.
 