The LAST R600 Rumours & Speculation Thread

Which TODAY would mean a complete waste of added hardware just to hide all the latency involved. It took 5 years for GPUs to go from a 256-bit to a 512-bit wide bus.

Only 1 year longer than the jump from a 128-bit to a 256-bit bus, so nothing too strange there imo.
 
Well, look at this: how fast have shader power and fillrates increased compared to bandwidth?


8500 Pro:

1100 Mpixels/s
2200 Mtexels/s
8.8 GB/s

9700 Pro:

2600 Mpixels/s
2600 Mtexels/s
19.8 GB/s
20.8 GFLOPS (I'm not sure if this is correct, so please correct me if I'm wrong; I'm not sure which ALUs the R300 used)

Fast forward to today, R580:

10400 Mpixels/s
10400 Mtexels/s
64 GB/s
360 GFLOPS


Mpixels/s more than doubled from the 8500 Pro to the 9700 Pro for a reason; if we don't see that kind of jump with the R600, that extra bandwidth won't be fully utilized. The ratio of fillrate to bandwidth has stayed pretty similar over time.
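Quick sanity check on that ratio, using only the numbers quoted above (just a back-of-the-envelope Python sketch; the labels and figures are the ones from this post, nothing official):

```python
# Bandwidth available per pixel of fillrate for the cards quoted above.
cards = {
    # name: (Mpixels/s, GB/s)
    "8500 Pro": (1100, 8.8),
    "9700 Pro": (2600, 19.8),
    "R580":     (10400, 64.0),
}

for name, (mpix, gbps) in cards.items():
    bytes_per_pixel = (gbps * 1e9) / (mpix * 1e6)
    print(f"{name:8s}: {bytes_per_pixel:.1f} bytes of bandwidth per pixel drawn")
# 8500 Pro: 8.0, 9700 Pro: 7.6, R580: 6.2 -- the ratio drifts down slowly
# rather than jumping around, which is the point being made here.
```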
 
If ATI delivers 128 vec4 ALUs and the card performs only 10% faster than the G80, ATI has some explaining to do. Even at *only* 700-800 MHz, 128 vec4 ALUs would have to be pretty damn inefficient to only outrun 128 scalar ALUs (@ 1350 MHz) by 10%. I would say ouch to ATI.
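For what it's worth, here's the paper math behind that as a rough Python sketch. The assumptions are mine: one 4-wide MADD per vec4 ALU per clock, one scalar MADD per G80 ALU per clock (ignoring G80's co-issued MUL and any real-world utilization), a MADD counted as 2 flops per lane, and the rumoured ~750 MHz clock.

```python
# Peak MADD throughput comparison, purely on-paper numbers.
def gflops(num_alus, lanes_per_alu, clock_ghz, flops_per_lane=2):
    """2 flops per lane per clock = one multiply-add."""
    return num_alus * lanes_per_alu * flops_per_lane * clock_ghz

r600_rumoured = gflops(128, 4, 0.75)  # 128 vec4 ALUs at ~750 MHz (rumour)
g80           = gflops(128, 1, 1.35)  # 128 scalar ALUs at 1350 MHz

print(f"Rumoured R600:   {r600_rumoured:.0f} GFLOPS")  # ~768
print(f"G80 (MADD only): {g80:.0f} GFLOPS")            # ~346
print(f"On-paper ratio:  {r600_rumoured / g80:.1f}x")  # ~2.2x
```

So on paper that configuration would have over twice the MADD rate of G80, which is exactly why a 10% lead would look so bad.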


Heh, I forgot it was 128 scalar in G80. For some reason I was thinking 64.

Then figure that they're doubled, and assume a good deal more efficient, and then add that ATI might once again be somewhat texture choked, and we get more reasonable competition (which could still allow ATI to be a good deal faster).
 
If you read back carefully enough, I questioned the supposedly exciting part about the ring stop thingy and NOT the bus width. If GPUs went beyond 512 bits of bus width right now, I'd have serious reasons to worry about read efficiency. Take the common burst length of either GDDR3 or GDDR4, multiply it by 768 or even 1024 bits, and tell me whether you win or lose more bandwidth in the end while reading a shitload of useless data.

The solution would be to increase the granularity of the memory controllers. 32/64 bits per channel would be reasonable.
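To put some rough numbers on both points (a sketch only; the burst lengths of 4 for GDDR3 and 8 for GDDR4 are my assumptions, and real controllers have more going on than this):

```python
# Minimum data moved per memory access: (interface width / 8) * burst length.
# One monolithic wide bus forces very large accesses; splitting it into
# narrow channels keeps the granularity small no matter how wide the total is.
bursts = {"GDDR3 (BL=4)": 4, "GDDR4 (BL=8)": 8}

def min_transfer_bytes(width_bits, burst_len):
    return width_bits // 8 * burst_len

for mem, bl in bursts.items():
    print(mem)
    for total_width in (256, 512, 768, 1024):
        print(f"  {total_width:4d}-bit as one channel: "
              f"{min_transfer_bytes(total_width, bl):4d} bytes per access; "
              f"as 64-bit channels: {min_transfer_bytes(64, bl)} bytes; "
              f"as 32-bit channels: {min_transfer_bytes(32, bl)} bytes")
```

If a fetch only actually needs 32-64 bytes, the gap between a 1024-byte minimum access and a 32-byte one is precisely the wasted bandwidth being complained about above.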
 
:?:

If you're happy with 256 bits, what exactly is wrong with the memory controller of R580 and how do you expect this to be improved in R600? Do you have reasons to assume it currently doesn't perform at the best possible efficiency?

I never said anything was wrong with 256-bit. I guess I'm connecting and relating too much with the 9700 Pro, if you know what I mean. I think what I mean to say is that I'm really excited for all that bandwidth, which is a direct by-product of the new bus.
 
Theory? Hehe, I would say it's almost certain.. JMHO. At least the scalar part, that is. Clock domains could be anyone's guess, but personally I think having a separate clock domain is a must when going scalar. :p
Yes, definitely. Without a higher clock rate things might become difficult. Otoh I'm not so sure if anything really needs its own, separate clock domain.


On-Topic:
What do you get if you combine Xenos-style ALUs with the rumoured numbers?
 
I suspect R600 will have quite a bit of a surprise, according to Geo.. unless he's just playing around with us with his Jedi mind tricks. :D

If they went for the X2900 series moniker, IMO ATi dumped the original planned R600 (not the entire architecture, but the original targeted core/memory clock speeds) and went through several more tape-outs to reach a higher-performing part, in order to fight nVIDIA's refresh (due to the delay) instead of G80.

It would be a total disaster for the marketing team if the X2900 series cannot clearly outperform the 8800 series in today's benchmarks. So I suspect they heavily increased the core clock speed and tweaked some things (i.e. the launch was delayed) instead of trying to rush it out of the door to face nVIDIA's next-gen card (from the rumoured 700 MHz quite a long time ago to somewhere around 800~900 MHz).

Then you have to wonder what ATi considers nVIDIA's "refresh" card.

How long has it been since G80 launched? 4 months? Or 5?
 
Yes, definitely. Without a higher clock rate things might become difficult.
The design becomes more difficult, mostly because it's not something that most semiconductor companies have been doing, but it can actually make the architecture much more efficient, both in performance and power consumption.
 
I'm still thinking 800MHz*64*(Vec4+1) would make a really interesting part. I'd be surprised if they don't have more than 64 pipes though, say 80. Of course, with 80 it could damn well be breaking the teraflop barrier. With the 1800->1900 move the ALUs apparently didn't take up very much space. I'd imagine they could do the same: take the savings from going unified, with no more redundant parts required, and 80 or 96 pipes would be entirely possible. Besides, the marketing department would go crazy over 1 TFLOP.
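Here's what those configurations look like on paper, under two flop-counting assumptions of mine (neither confirmed anywhere): each Vec4+1 ALU issuing a MADD per lane (2 flops/lane), or R580-style counting with an extra ADD per lane (3 flops/lane):

```python
# Paper GFLOPS for an 800 MHz part built from Xenos-style Vec4+1 ALUs.
CLOCK_GHZ = 0.8
LANES = 5  # 4-wide vector + 1 scalar per ALU

for pipes in (64, 80, 96):
    madd_only = pipes * LANES * 2 * CLOCK_GHZ  # MADD = 2 flops per lane
    madd_add  = pipes * LANES * 3 * CLOCK_GHZ  # MADD + ADD = 3 flops per lane
    print(f"{pipes} pipes: {madd_only:.0f} GFLOPS (MADD only), "
          f"{madd_add:.0f} GFLOPS (MADD+ADD counting)")
# 64 pipes: 512 / 768, 80 pipes: 640 / 960, 96 pipes: 768 / 1152
```

So whether 80 pipes "breaks the teraflop barrier" depends entirely on how generously the flops are counted; with the MADD+ADD counting it gets close at 80 and clears it at 96.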

Also consider that the G80 refresh is likely gonna be a GX2 setup, so if they want to fight that, the chip will have to be a monster.
 
I suspect R600 will have quite a bit of a surprise, according to Geo.. unless he's just playing around with us with his Jedi mind tricks. :D

If they went for the X2900 series moniker, IMO ATi dumped the original planned R600 (not the entire architecture, but the original targeted core/memory clock speeds) and went through several more tape-outs to reach a higher-performing part, in order to fight nVIDIA's refresh (due to the delay) instead of G80.

It would be a total disaster for the marketing team if the X2900 series cannot clearly outperform the 8800 series in today's benchmarks. So I suspect they heavily increased the core clock speed and tweaked some things (i.e. the launch was delayed) instead of trying to rush it out of the door to face nVIDIA's next-gen card (from the rumoured 700 MHz quite a long time ago to somewhere around 800~900 MHz).

Then you have to wonder what ATi considers nVIDIA's "refresh" card.

How long has it been since G80 launched? 4 months? Or 5?

Err, you realize that 8800 is greater than X2900, right?

:runaway: ZOMG G80 is 3.0344827586206896551724137931034 times faster than the R600! :runaway:

* Natoma breaks out Wavey's frying pan
 
X2900 means 12900, not 102900 :p

But AMD will have plenty of cards with larger numbers.

AMD's ATI Radeon X2900XTX = 1290000 > 8800
AMD's ATI Radeon X2900XT = 129000 > 8800
AMD's ATI Radeon X2900XL = 129000 > 8800

:runaway:
 
But AMD will have plenty of cards with larger numbers.

AMD's ATI Radeon X2900XTX = 1290000 > 8800
AMD's ATI Radeon X2900XT = 129000 > 8800
AMD's ATI Radeon X2900XL = 129000 > 8800

:runaway:

Geforce -> Giga force -> x times 1000.


Therefore, Geforce "8800" = 8800000 > 1290000


:D
 
But AMD will have plenty of cards with larger numbers.

AMD's ATI Radeon X2900XTX = 1290000 > 8800
AMD's ATI Radeon X2900XT = 129000 > 8800
AMD's ATI Radeon X2900XL = 129000 > 8800

:runaway:

You keep adding an extra zero :p It goes Radeon 7000, 8000, 9000, 10000, 11000, 12000, with the last three replacing the leading one with an X.
 
Don't think people get my point.

I'm sure people who buy performance cards ($199 and up) do know the current trend of video cards. Generally the -800 series has been the high end, the -600 series the mid-range, and the -300/-200 series the low end, with -900 being the refresh of that generation of video cards. This is the sort of trend we've seen from the FX/R300 days to the current day.

Note the word "marketing".
 
You keep adding an extra zero :p It goes Radeon 7000, 8000, 9000, 10000, 11000, 12000, with the last three replacing the leading one with an X.

Stop trying to bring us back to logic. Logic has no place in this portion. :LOL: The X at the end of it adds a 10. XTX has two (100), XT has one (10), and XL has one (10).
 