Which TODAY would mean a complete waste of added hardware in order to hide all that latency involved. It took 5 years for GPUs to go from a 256-bit to a 512-bit wide bus.
Only 1 year longer than from a 128-bit to a 256-bit bus, so nothing too strange there imo.
If ATI delivers 128 vec4 ALUs and the card performs only 10% faster than the G80, ATI has some explaining to do. Even at *only* 700-800MHz, 128 vec4 ALUs would have to be pretty damn inefficient to only outrun 128 scalar ALUs (@ 1350MHz) by 10%. I would say ouch to ATI.
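For rough context, here's the paper math (a sketch; the 2-flops-per-MAD accounting and the 750MHz midpoint of the rumoured range are my assumptions):

```python
# Back-of-the-envelope peak MAD throughput, assuming each ALU component
# can issue one MAD (2 flops) per clock. 750MHz is an assumed midpoint
# of the rumoured 700-800MHz range; 1350MHz is G80's shader clock.
def peak_gflops(alus, components, clock_ghz, flops_per_clock=2):
    return alus * components * flops_per_clock * clock_ghz

r600_guess = peak_gflops(128, 4, 0.75)  # 128 vec4 ALUs @ 750MHz
g80 = peak_gflops(128, 1, 1.35)         # 128 scalar ALUs @ 1350MHz

print(f"128 vec4   @ 750MHz : {r600_guess:.0f} GFLOPS")  # 768
print(f"128 scalar @ 1.35GHz: {g80:.0f} GFLOPS")         # 346
```

On paper that's more than twice the raw MAD rate, which is exactly why a mere 10% lead would look so bad.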
If you read back carefully enough, I questioned the supposedly exciting part about the ring stop thingy and NOT the bus width. If GPUs went beyond 512 bits of bus width right now, I'd have serious reasons to worry about read efficiency. Take the common burst value of either DDR3 or 4, multiply it by 768 or even 1024 bits, and tell me if you win or lose more bandwidth in the end while reading a shitload of useless data.
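To put numbers on that (a minimal sketch, assuming a burst length of 8, common for GDDR3/GDDR4-class parts, and assuming, as the post does, that the whole bus is driven as one monolithic channel):

```python
# Minimum memory transaction size = bus width x burst length.
# BURST_LENGTH = 8 is an assumption (typical for GDDR3/GDDR4-class DRAM);
# real parts and controller configurations vary.
BURST_LENGTH = 8

for bus_bits in (256, 512, 768, 1024):
    bytes_per_read = (bus_bits // 8) * BURST_LENGTH
    print(f"{bus_bits:4}-bit bus -> {bytes_per_read:4} bytes per minimum read")
```

Past 512 bits every access drags in close to a kilobyte, so unless your reads are large and linear, a growing share of the raw bandwidth goes to data you simply throw away.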
If you're happy with 256 bits, what exactly is wrong with the memory controller of R580 and how do you expect this to be improved in R600? Do you have reasons to assume it currently doesn't perform at the best possible efficiency?
Yes, definitely. Without a clock rate things might become difficult. Otoh I'm not so sure if anything really needs its own and separate clock domain.

Theory? hehe, I would say it's almost certain.. JMHO. At least for the scalar part, that is. Clock domains could be anyone's guess, but personally I think a separate clock domain is a must when going scalar.
What's so exciting about it, other than that it's 512 bits wide?
The design becomes more difficult, mostly because it's not something that most semiconductor companies have been doing, but it can actually make the architecture much more efficient, both in performance and power consumption.
I suspect R600 will have quite a bit of surprise in store, according to Geo.. unless he's just playing us with his Jedi mind tricks.
If they went for the X2900 series moniker, IMO ATi dumped the originally planned R600 (not the entire architecture, but the original targeted core/memory clock speeds), and went through several more tape-outs to reach a higher-performing part in order to fight nVIDIA's refresh (due to the delay) instead of G80.
It would be a total disaster for the marketing team if the X2900 series cannot clearly outperform the 8800 series in today's benchmarks. So I suspect they heavily increased the core clock speed (from the 700MHz rumoured quite a long time ago to somewhere around 800~900MHz) and tweaked some things (hence the delayed launch), instead of trying to rush it out the door to face nVIDIA's next-gen card.
Then you have to wonder what ATi considers nVIDIA's "refresh" card.
How long has it been since G80 launched? 4 months? Or 5?
Err, you realize that 8800 is greater than X2900, right?
ZOMG G80 is 3.0344827586206896551724137931034 times faster than the R600!
* Natoma breaks out Wavey's frying pan
But 8800 is smaller than 102900!
x2900 means 12900, not 102900
But AMD will have plenty of cards with larger numbers.
AMD's ATI Radeon X2900XTX = 1290000 > 8800
AMD's ATI Radeon X2900XT = 129000 > 8800
AMD's ATI Radeon X2900XL = 129000 > 8800
Well that's just up to how you want to decipher it, I mean, it's the only Roman numeral there, so it can be seen as its own entity, 10, and then add 2900 behind it: 102900.
You keep adding an extra zero. It goes Radeon 7000, 8000, 9000, 10000, 11000, 12000, with the last three replacing the leading one with an X.
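In other words (a toy sketch of that scheme; the function name and zero-padding rule are mine, purely for illustration):

```python
# Toy decoder for the scheme above: the marketing "X" stands in for the
# leading "1" of the five-digit series number, so X800 -> 10800,
# X1900 -> 11900, X2900 -> 12900. The padding rule is my assumption.
def series_number(name: str) -> int:
    name = name.upper()
    if name.startswith("X"):
        return int("1" + name[1:].zfill(4))  # "X800" -> "1" + "0800"
    return int(name)

for model in ("9700", "X800", "X1900", "X2900"):
    print(model, "->", series_number(model))
```

So X2900 reads as 12900, which still beats 8800 without any extra zeroes.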