One characteristic of the RV670, and very possibly the 770, is that voltage doesn't matter much relative to clock speed for overall consumption (I think the guys on XS concluded this). Which isn't even vaguely close to the ideal, I know.

That's not even close to the ideal; it doesn't make sense at all. Whatever data XS had, I bet their conclusions are wrong. It would make a nice article, though, if someone varied clock / voltage and measured power draw on an RV670 (and RV770).
Could you restate what that means?
Are you trying to say that a voltage bump at the same clock raises power consumption, but only minimally?
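For reference, a sketch using the standard first-order CMOS dynamic-power model (it ignores leakage, which is not negligible on these processes):

```latex
P_{\mathrm{dyn}} \;\approx\; \alpha\, C\, V^{2} f
\qquad\Longrightarrow\qquad
\frac{P_{2}}{P_{1}} \;\approx\; \frac{f_{2}}{f_{1}} \left(\frac{V_{2}}{V_{1}}\right)^{2}
```

Under that model a 20% voltage bump at constant clock adds roughly 44% dynamic power, while a 20% clock bump at constant voltage adds roughly 20%, so voltage should matter more per step, not less; if the XS measurements really showed the opposite, leakage or measurement error would have to account for the gap.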
Self-aware GPUs scare me!
No more ringbus + fixed performance (by way of lots more units while keeping transistor count fairly low) seems to indicate that the ringbus naysayers were right & the ringbus was a waste of transistors?
Do we know which bit of ATI/AMD designed the RV770?
Team A: R300 -> Xenos -> RV770?
Team B: R420 -> R520 -> R600
B3D: With respect to engineering resources, it's been suggested to us that the “West Coast Team” (Santa Clara - Silicon Valley) has become the main focus for all the PC parts coming from ATI and that now even R500, which we initially understood to be an “East Coast Team” (Marlborough) product, is being designed at Santa Clara. Is it the case that Santa Clara will mainly produce the PC parts now, while Marlborough will be active with “special projects” such as the next X-Box technologies?
We had this concept of the “ping-pong” development between the west and east coast design centres. On paper this looked great, but in practice it didn’t work very well. It doesn’t work well for a variety of reasons, but one of them is that the PC architecture, at the graphics level, has targeted innovation and clean-sheet innovation, and whenever you have separate development teams you are going to, by nature, have a clean-sheet development on every generation of product. For one, we can’t afford that, and it’s not clear that it’s the right thing to do for our customers from a stability standpoint. It’s also the case that there’s no leverage from what the other development team has done, so in some cases you are actually taking a step backwards instead of forwards.
What we are now moving towards is actually a unified design team of both east and west coast, that will develop our next generations of platforms, from R300 to R400 to R500 to R600 to R700, instead of a ping-pong ball between them both. Within that one organisation we need to think about where do we architecturally innovate and where do we not in order to hit the right development cycles to keep the leadership, but it will be one organisation.
If you dissect in, for example, to the R600 product, which is our next, next generation, that development team is all three sites - Orlando, Silicon Valley, Marlborough – but the architectural centre team is in the Valley, as you point out, but all three are part of that organisation.
B3D: Would I be correct in suggesting that mainly Marlborough and Orlando would be the R&D centres – with the design of various algorithms for new 3D parts – while the Santa Clara team would be primarily responsible for implementing them in silicon?
No, because the architecture of the R300 and R500 is all coming from the Valley, but we’ve got great architects in all three sites.
Bob Drebin in the Valley is in charge of the architecture team and so he’s in charge of the development of all the subsequent architectures but he goes out to the other teams key leaders and that forms the basis of the unified architectural team. At an implementation level, you’re right – Marlborough is mainly focused on the “special projects” and that will probably be another 18 to 24 months for them. So the R600 family will mainly be centred primarily in the Valley and Orlando with a little bit from Marlborough, and then the R800 would be more unified.
No more ringbus + fixed performance (by way of lots more units while keeping transistor count fairly low) seems to indicate that the ringbus naysayers were right & the ringbus was a waste of transistors?

Jawed said something that sort of echoed my sentiments. The MC is distributed around the edges of the die, and you're going to tell me the ring bus is dead? Maybe technically it's not a ring bus, but the idea is the same.
It looks like there's a 1:1 relationship between TUs and L1s.

Perhaps each TU can have short-cycle access to the nearest TU's L1, but it would seem sensible to assume that there are frequent cases where one L1's data set would be in the same locality as another's.
Alternatively the TUs can fetch from either vertex cache or global data share. Is it reasonable to presume that at any one time only one TU can fetch from either GDS or VC? If all the TUs could concurrently fetch from VC, say, you'd have a completely stupid crossbar.

If that were the case, vertex work and synchronization operations would be bottlenecked at one access per cycle for the entire chip, assuming the data shares enable synchronization primitives.
So, is local data share an analogue of parallel data cache in G80?

Possibly. I can imagine it could be used to emulate CUDA functionality, if AMD chooses to expose it. I'm not sure if it's a good long-term idea, but I dunno. It may be safer to abstract it behind ops that implicitly handle the local storage.
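For anyone who hasn't used CUDA, this is roughly what the parallel data cache on G80 looks like from the programming side: a small per-block scratchpad (__shared__) that threads stage data into, synchronise on, and then reuse. A minimal sketch only, illustrating the NVIDIA feature being used as the point of comparison, not any exposed AMD interface for the local data share; the kernel name and sizes are illustrative.

```cuda
// Minimal illustration of the per-block scratchpad ("parallel data cache")
// exposed as __shared__ memory in CUDA: a plain block-level sum reduction.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float *in, float *out)
{
    __shared__ float tile[256];            // per-block scratchpad

    unsigned int tid = threadIdx.x;
    tile[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                       // make the staged data visible to the whole block

    // Tree reduction within the block, entirely out of shared memory.
    for (unsigned int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = tile[0];         // one partial sum per block
}

int main()
{
    const int blocks = 4, threads = 256, n = blocks * threads;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    blockSum<<<blocks, threads>>>(in, out);
    cudaDeviceSynchronize();

    printf("per-block sums: %.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The __syncthreads() barriers are the part that matters for the comparison: the scratchpad is only useful because threads in a block can coordinate around it.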
It was the ArtX team, the creators of the R300.
Just kidding.
http://www.beyond3d.com/content/interviews/8/3
Is it possible that one RV770 has only "2 ringbus stops"?
What happened to ATI-Toronto?
Naturally it's the 10.1 path for Radeons and 10 for GeForces.
I hope you can see everything on this die shot.
... which is apples to oranges. Doesn't make sense to compare those.

And it does make sense to patch it for Radeon testing? So that their performance is artificially crippled, even if there are no graphic bugs in the unpatched game?