"I read the message online." The R600 is still scheduled for Q1 2007.

Well, Q1 lasts about three months; it could be as late as March 2007.
So far ATI is keeping quiet about R600; there could be two reasons:
A. ATI is repeating history, as with R520.
B. The chip is going to be something worthy of R580's successful 48-pixel-shader design.
If the late-February / early-March time frame is actually true, Nvidia would respond quickly, just like the G70-to-G71 transition.
By then it will take ATI some time to respond to G81 with R650 - at the moment it looks to me like ATI is playing a catch-up game.

If ATI really wants to win, they have to be more aggressive, just like Nvidia. Come on ATI/AMD, get it out by January at the latest. I just don't know why ATI's update cycle slipped while Nvidia is out already.

So far everybody's hope is that ATI has something big, I mean BIG, hidden up their sleeves that they won't share with us. Some people think it is another R300; some think the opposite, but who knows. This is ATI's last chance to recover from the mistakes they made previously. I will give Nvidia big credit for recovering quickly from the NV30 disaster.
I'm not saying ATI is in trouble, past or present, but they are not in the same sweet spot they were back in 2002, in the "R300 time".
 
Well, Q1 lasts about three months; it could be as late as March 2007.
So far ATI is keeping quiet about R600; there could be two reasons.
... past or present, but they are not in the same sweet spot they were back in 2002, in the "R300 time".

Psycho Killer. 2nd verse. 2nd line.
 
I see where you caught me in a sentence; the reason I said that is that R520 was not a disaster the way NV30 was.

A. You could still run DX9.0 SM2.0/SM3.0 games.
It did not have the same troubles as NV30. All ATI needed was to tweak R520 a little bit, and it was a good chip. If ATI had released it sooner, it would have been very competitive in that time frame.
 
I believe he means "You're talkin' a lot, but you're not sayin' anything." The "Fa-fa-fa, fa fa fa-fa-fa fa" is the refrain.

I learned that from politics, or representatives, or something.
My current job is similar.
 
I believe he means "You're talkin' a lot, but you're not sayin' anything." The "Fa-fa-fa, fa fa fa-fa-fa fa" is the refrain.

Ok then, I would like to say directly and clearly what I learned!
ATI/AMD's R600 features 64 4-way SIMD shader units; this is a very different and more complex approach compared to Nvidia's relatively simple scalar shader units. Since each R600 SIMD shader can calculate the result of four scalar units, it yields the scalar throughput of 256 units, while G80 comes with 128 "real" scalar units. We are heading for very interesting results in DirectX 10 performance, since game developers expect that G80 will be faster on simple instructions and R600 will excel in the complex-shader arena.
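To see why the SIMD-vs-scalar question isn't settled by raw unit counts, here's a tiny Python sketch. The "packing efficiency" knob is my own hypothetical assumption: a 4-wide unit only retires 4 useful ops per clock when the compiler can bundle 4 independent scalar ops, while scalar units never waste a lane.

```python
# Hypothetical back-of-envelope model of the rumored shader setups.
# Unit counts are the thread's rumors, not confirmed specs, and
# packing_efficiency is an illustrative assumption, not a measurement.

def scalar_ops_per_clock(units, width, packing_efficiency):
    """Useful scalar operations retired per clock.

    A width-4 SIMD unit only does 4 useful ops/clock when the shader
    compiler can pack 4 independent scalar ops together; a scalar unit
    (width 1) is always fully utilized on independent work.
    """
    return units * width * packing_efficiency

# Rumored R600: 64 vec4 units, here assumed at 75% packing on scalar-heavy code
r600_vec4 = scalar_ops_per_clock(units=64, width=4, packing_efficiency=0.75)

# G80: 128 scalar units, trivially 100% packed
g80_scalar = scalar_ops_per_clock(units=128, width=1, packing_efficiency=1.0)

print(r600_vec4)   # useful ops/clock at the assumed 75% packing
print(g80_scalar)  # ops/clock, always fully packed
```

At the assumed 75% packing the vec4 design still leads per clock (192 vs 128), but the gap is clock-for-clock, before G80's much higher shader clock enters the picture.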
The R600 might end up a very complex design in a different way from G80, similar to the R580 chip; the R600 is supposed to handle more intense and complex visual dynamics better than its NVIDIA opponent. Based on the R600 information available right now, it can render 16 pixels per clock at 700-800MHz, as opposed to G80's 24 pixels at 575MHz. In the end, 16x750 is smaller than 24x575, but not by much. Please don't forget that R580 also has 16 ROPs, as opposed to Nvidia G71's 16 ROPs and 24 texture units, and that doesn't make the ATI card perform worse than NVIDIA's. The real difference is within the shader units: R600 has 64 complex shader units, each composed of 4 simple units, so 4x64 = 256 simple shader units.
GeForce 8800 GTX has 128 simple shader units but about double the clock, 1350MHz. Which one is better: 256 shaders at 800MHz or 128 at 1350MHz?
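For what it's worth, the arithmetic behind those two comparisons works out like this (all clocks and unit counts are the rumored figures from this thread, not confirmed specs, and peak ops/s says nothing about real utilization):

```python
# Peak scalar throughput: simple units x clock, in billions of ops/s.
# These are the thread's rumored numbers, not confirmed specs.

def peak_gops(scalar_units, clock_ghz):
    """Peak scalar shader operations per second, in Gops/s."""
    return scalar_units * clock_ghz

r600_gops = peak_gops(256, 0.800)   # ~204.8 Gops/s at the rumored 800MHz
g80_gops  = peak_gops(128, 1.350)   # ~172.8 Gops/s at 1350MHz

# Pixel fillrate: pixels-per-clock x core clock, in Mpixels/s.
r600_fill = 16 * 750                 # 12000 Mpix/s at a mid-range 750MHz
g80_fill  = 24 * 575                 # 13800 Mpix/s
```

So on paper the rumored R600 wins the shader-ops race despite the much lower clock, while G80 keeps a modest fillrate edge (12000 vs 13800 Mpix/s), which matches the "smaller, but not by much" remark above.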

Off topic: I also want to mention the memory chips for R600. They are arranged in a similar manner to those of the G80, but each memory chip has its own 32-bit-wide physical connection to the chip's ring-bus memory interface. Memory clocks should range anywhere from 1.1GHz to 1.2GHz GDDR4; this leaves the G80 series in the dust, at least as far as marketing is concerned. Of course, the G80's 86GB/s sounds like a lot, but it's nothing compared to 140GB/s, and we can at least expect to see that for real.
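Those bandwidth figures check out if you assume a 512-bit ring bus on R600 and G80's 384-bit bus at 900MHz GDDR3 (both assumptions of this sketch, not confirmed specs), remembering that GDDR transfers two bits per pin per clock:

```python
# Memory bandwidth sanity check for the 140GB/s vs 86GB/s claim.
# Bus widths are this sketch's assumptions: 512-bit for R600's
# ring bus, 384-bit for G80; DDR moves 2 bits per pin per clock.

def bandwidth_gb_s(bus_bits, mem_clock_ghz):
    """Peak memory bandwidth in GB/s for a DDR-style interface."""
    bytes_per_clock = bus_bits / 8
    return bytes_per_clock * mem_clock_ghz * 2   # x2 for DDR

r600_bw = bandwidth_gb_s(512, 1.1)   # ~140.8 GB/s at 1.1GHz GDDR4
g80_bw  = bandwidth_gb_s(384, 0.9)   # ~86.4 GB/s at 900MHz GDDR3
```

At the higher 1.2GHz rumor the same formula gives roughly 153GB/s, so the "leaves G80 in the dust" line is at least arithmetically plausible on paper.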
 
Ok then, I would like to say directly and clearly what I learned! ATI/AMD R600 features 64 Shader 4-way SIMD units ... at least expect to see that for real.
Didn't I read that almost verbatim at the Inquirerererer a while back?
 