The R7xx Architecture Rumours & Speculation Thread

I don't think it's impossible that R700 could be 45nm. With 45nm production expected this September at TSMC, and R700 to come in, say, May or June next year. Possibly with RV730 and RV710 on a 40nm process?
As I understand it, N45 is effectively a "40nm" process (transistors are drawn in "45nm", followed by an optical shrink). There's no advertised option to skip the shrink.

Production ramps of N45 wafers seem unlikely before this time next year, and that's somewhat optimistic.
 
I thought Xenos/C1 was the reincarnated R400 with some changes, and R600 is R400's evolution with some of R520's ideas. I don't think it's a coincidence that all three chips have a tessellation unit.

I expect two things: smaller threads, and a wider architecture with smaller SIMDs.
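A quick sketch of why narrower SIMDs would help, assuming branch divergence behaves the usual way (the C below is purely illustrative; the widths and the divergence pattern are invented): a batch whose pixels disagree on a branch must execute both paths, so narrower batches waste fewer slots.

```c
#include <stdio.h>
#include <stdbool.h>

#define N 256

/* Count instruction slots spent on one divergent branch for a given
 * SIMD batch width: a batch whose threads disagree must execute both
 * paths, so it costs 2*width slots instead of width. */
static int slots_for_width(const bool taken[N], int width)
{
    int slots = 0;
    for (int base = 0; base < N; base += width) {
        bool any_taken = false, any_not = false;
        for (int i = 0; i < width; ++i) {
            if (taken[base + i]) any_taken = true;
            else                 any_not   = true;
        }
        slots += width * ((any_taken ? 1 : 0) + (any_not ? 1 : 0));
    }
    return slots;
}

int main(void)
{
    bool taken[N];
    /* Invented divergence pattern: the branch is taken by the first
     * 40 of every 64 pixels, like an edge cutting through a tile. */
    for (int i = 0; i < N; ++i)
        taken[i] = (i % 64) < 40;

    printf("width 64: %d slots\n", slots_for_width(taken, 64)); /* 512 */
    printf("width 16: %d slots\n", slots_for_width(taken, 16)); /* 320 */
    printf("ideal:    %d slots\n", N);                          /* 256 */
    return 0;
}
```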

I guess it's time for the famous ArtX legend: R200 sucked, R400 sucked, R600 sucks. ArtX's R300 was wow, R520/R580 were wow, R700 will be wow! :LOL:

ATI's R200 was not so bad against the GeForce 3.
R420 was based on the R300/R350 design.
R520 for me was NOT wow, but R580 was superb! :) Very good...
R600: no comments needed.

R700 simply unknown at this point.
 
What do you mean by this? You mean in the sense of one chip, or you mean in the sense of however it's configured it won't be performance competitive with the best the other fellows can offer as a board-level offering?

I mean that they won't be visible on the high-end side of things at all. I think R600 finished them off and only, ONLY if the initial steps of the R700 look very good from the get-go and there are no troubles whatsoever will we see a high-end chip. If it turns out as buggy and demanding as R600, it'll get axed immediately and they'll just do the mid-to-low end and the mainboard-gfx stuff.

Actually, that is kinda what they already did with this gen, opting not to push out a competitor for the G80/Ultra. And now they're even further behind the curve compared to nV.
 
As I understand it, N45 is effectively a "40nm" process (transistors are drawn in "45nm", followed by an optical shrink). There's no advertised option to skip the shrink.
I'm really not sure what you mean there. 40nm is an optional half-node to 45nm, just like half-nodes have always been... This is the most recent roadmap I've seen floating around: http://www.notforidiots.com/TSMC.gif

Interestingly, the half-node is 40LP, not 40GS! This is the first time I've seen an LP half-node, and it's also the first time there has been an eDRAM half-node. It's likely that 40GS and/or 40LPG simply aren't on that roadmap yet, but this is quite interesting either way.
 
Actually, that is kinda what they already did with this gen, opting not to push out a competitor for the G80/Ultra. And now they're even further behind the curve compared to nV.

I get the bitterness and disappointment, really I do... but it's hard to see a $399 board as anything other than "high end". If you mean to suggest that AMD won't offer a board above the $250 price point for the R7xx gen, then I think there isn't a chance in hell of that happening.
 
If R700 consists of two chips, are we likely to see a 1-chip graphics card launched at the same time? What would that be called?

Does the "R670: consisting of 2x RV670" naming that I've seen bandied about imply that R700 consists of 2x RV700?

Jawed
 
Yes, AMD's new strategy is to launch a high-end and a mid-range SKU every six months.
But I do not think the mid-range chip is called RV700.
 
I get the bitterness and disappointment, really I do... but it's hard to see a $399 board as anything other than "high end". If you mean to suggest that AMD won't offer a board above the $250 price point for the R7xx gen, then I think there isn't a chance in hell of that happening.

Geo, I'm just talking about having the f4stest3st chip, a clear winner either feature- or speed-wise or both (like nV has right now). They gave that up a long time ago, or rather, they haven't made it on time ever since R300. Why should that change for the better now, when they stand worse than ever?
 
Geo, I'm just talking about having the f4stest3st chip, a clear winner either feature- or speed-wise or both (like nV has right now). They gave that up a long time ago, or rather, they haven't made it on time ever since R300. Why should that change for the better now, when they stand worse than ever?

Was R420 late? I thought it was on time and competitive as well as arguably faster.

R520 was three months late due to some bug? Other than that, its refresh came before the 7900 GTX. Both the X1800 XT and X1900 XT were faster than the 7800 GTX and 7900 GTX respectively as well.

Yep, R600 was late for sure, due to whatever reasons. I don't think this affects their roadmap in any major way. From the looks of things, G92 might be a performance part with some architectural improvements over G80 to fill in the gap between the 8600 and 8800. If that's the case, the 2900 Pro will either be there waiting for it, or right on its heels.
 
I'm really not sure what you mean there. 40nm is an optional half-node to 45nm, just like half-nodes have always been... This is the most recent roadmap I've seen floating around: http://www.notforidiots.com/TSMC.gif

Interestingly, the half-node is 40LP, not 40GS! This is the first time I've seen an LP half-node, and it's also the first time there has been an eDRAM half-node. It's likely that 40GS and/or 40LPG simply aren't on that roadmap yet, but this is quite interesting either way.

What I meant is that from what I've seen, the half-node shrink is implicit. Everything is drawn in 45nm, but the SPICE models (as an example) are for 40nm devices.
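To put rough numbers on what the implicit shrink buys -- a back-of-the-envelope sketch, assuming a simple linear 45nm-to-40nm scaling (the real shrink factor TSMC applies isn't public here):

```c
#include <stdio.h>

/* Back-of-the-envelope optical-shrink arithmetic, illustrative only.
 * Assumes a simple linear shrink from 45nm drawn to 40nm effective. */
int main(void)
{
    double drawn     = 45.0;               /* drawn feature size, nm  */
    double effective = 40.0;               /* post-shrink size, nm    */
    double linear    = effective / drawn;  /* ~0.889 linear factor    */
    double area      = linear * linear;    /* area scales quadratically */

    printf("linear shrink: %.3f\n", linear);
    printf("area factor:   %.2f (a die ~%.0f%% of the 45nm size)\n",
           area, area * 100.0);
    return 0;
}
```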
 
Geo, I'm just talking about having the f4stest3st chip, a clear winner either feature- or speed-wise or both (like nV has right now). They gave that up long time ago, or better they didn't make it on time ever since R300. Why should that change for the better now, as they stand worse than ever?

And yet what you actually said was:

I mean that they won't be visible on the high-end side of things at all.

Hence my wanting to clarify. I think most people would not get the top feeling from the bottom text.

Tho I think we're entering an age where it's going to make less sense to talk about "fastest chip" and more sense to start talking about "fastest boards" instead.

In the fall of 2003, I would not have been willing to bet that NV40 was around the corner. I'm certainly not willing to bet right now that ATI can reclaim the performance or features crown next time. I'm a little surprised you're sure they can't, but you're entitled to your view.
 
If R700 is targeted for a dual-chip package/board, how would it be managed?
Would it be treated as Crossfire on a board, or something more closely integrated?

I keep thinking back to how Opteron manages it, with the essentially transparent access to remote memory through the link between two chips.

Crossfire, if I recall correctly, explicitly sets out private and shared memory space on the separate boards.

If the chips were on the same package, remote memory access would be much lower latency than a connection bridging two cards.
There would be less need for private memory spaces containing duplicate data for the sake of minimizing latency.
At the same time, board space and routing concerns would make more efficient use of RAM important.
Shared access would mean there wouldn't be private memory pools with duplicate data, either cutting into memory usage or requiring expensive amounts of RAM crammed onto one PCB.

If the access were transparent, it would also allow the dual-chip cards to be transparently (maybe) run in Crossfire mode with other cards.

Of course, the extra communications on such a high-bandwidth workload would require a memory controller design that is somewhat over-engineered for a single chip...
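To make the Opteron analogy concrete, here's a minimal sketch in C of how a unified address space might be interleaved across the two chips' pools -- entirely hypothetical: the page size and interleave policy are invented, and nothing here is a real AMD design.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical Opteron-style sketch: one flat address space
 * interleaved across the two chips' memory pools at page
 * granularity. PAGE_BITS and the alternate-page policy are
 * invented for illustration. */
#define PAGE_BITS 12  /* 4 KiB pages */

/* Which chip physically owns a given address: alternate pages. */
static int owner_chip(uint64_t addr)
{
    return (int)((addr >> PAGE_BITS) & 1);
}

/* A request from `this_chip` either hits local DRAM or crosses the
 * inter-chip link. On-package, that hop is cheap enough that the
 * driver needn't duplicate data into private per-chip pools. */
static bool is_local_access(int this_chip, uint64_t addr)
{
    return owner_chip(addr) == this_chip;
}
```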
 
Crossfire, if I recall correctly, explicitly sets out private and shared memory space on the separate boards.
I suspect R600 CF already unifies the memory space attached to all the GPUs.

Bandwidth between the two pools of memory then becomes a key parameter. Note the double-bridge for R600 CF. How much bandwidth is that?

Jawed
 
The dual CF bridge arrived back with the first RV570 boards, where each link is 12 bits wide -- something to match and exceed the capacity of the former external compositing engine (the one with the silly cabling). Probably it's used for nothing more than passing screen data around faster.
As for the CF efficiency in R600 setups, I think it's down to reduced overhead in managing the multi-GPU load, probably related to the command processor; as pure assumption, this could be stretched to include some virtualization, coherency, and general memory-loading improvements.
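For a rough feel of the numbers -- a sketch, not a measurement; the 12-bit width comes from above, but the per-pin rate is a pure guess (DVI-class 165 MHz used as a stand-in):

```c
#include <stdio.h>

/* Back-of-the-envelope CF bridge bandwidth, illustrative only.
 * The 12-bit link width is from the post above; the per-pin data
 * rate is an ASSUMPTION (the real clock isn't public here). */
int main(void)
{
    double pin_rate_mhz = 165.0;  /* ASSUMED per-pin rate */
    double link_bits    = 12.0;   /* per the post above   */
    double links        = 2.0;    /* double bridge        */

    double gbit_s  = pin_rate_mhz * 1e6 * link_bits * links / 1e9;
    double mbyte_s = gbit_s * 1000.0 / 8.0;

    /* What a 1920x1200 @ 60 Hz, 24bpp screen stream needs: */
    double screen_mbyte_s = 1920.0 * 1200.0 * 3.0 * 60.0 / 1e6;

    printf("bridge: ~%.1f Gbit/s (~%.0f MB/s)\n", gbit_s, mbyte_s);
    printf("1920x1200@60 stream: ~%.0f MB/s\n", screen_mbyte_s);
    return 0;
}
```

Under that guess the double bridge lands around 500 MB/s, roughly one high-res screen stream -- consistent with it being used mainly to pass display data around.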
 
And yet what you actually said was:

Yeah well, high-end this gen was the G80 GTX/Ultra; the GTS was high-midrange, and that's where ATI landed with R600. I don't think they'll suddenly do better after trailing nV for three generations in both timing and features.

In the fall of 2003, I would not have been willing to bet that NV40 was around the corner. I'm certainly not willing to bet right now that ATI can reclaim the performance or features crown next time. I'm a little surprised you're sure they can't, but you're entitled to your view.

NV had only one failure, while ATI have had them ever since R420 (or let's say since they had to dump R400). Then came R420 (missing features), R520 (much too late and too slow), then R580 (should have been there instead of R520, alas too little too late). R590 was nice, but arrived too late to change much. And then came R600, which topped it all by some margin. So please tell me what makes you think that they could come up with a magic chip out of nowhere in this situation? Based on what tech? And also, do you think that they could do anything that would catch nV off guard? As I mentioned many times, nV had time to relax, sit back, test fresh ideas and evaluate new solutions while ATI had to struggle to get the products out. Just the last round since G80 gave nV at least a six-month lead in R&D, let alone the financial differences.
 
My main concern is how they are going to handle the dual R700 for the high-end card. If it isn't significantly more integrated/clever than current SLI/Crossfire dual-GPU systems, it isn't going to be a card I could even remotely consider buying.

Current dual-GPU solutions can't handle multi-monitor setups when in SLI/Crossfire mode. That right there is a deal killer for me and various people I know, and the main reason I'll never go back to SLI/Crossfire until multi-monitor works in those modes.

It'd be interesting to see how AMD plans on addressing this and how integrated those GPUs will be in the high-end card.

If they can get it working as if it was a single non-SLI/Crossfire card that handles multi-monitors while both GPUs are engaged in rendering, then it'll still be a consideration for my next machine along with whatever Nvidia's next card is.

However, if not, then ATI can kiss my dollars goodbye until they do, and it's back to Nvidia by default.

And, unlike most, I'm really hoping ATI doesn't go back to fixed-function hardware-accelerated AA. I rather like the flexibility of the shader-based AA. I'm hoping that they are working on more custom AA filters. The recently released edge-detect is quite nice, although it's too slow in a few games I run.
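For flavour, here's the kind of thing a custom edge-detect resolve does, sketched in plain C rather than actual shader code -- this is not ATI's filter, just the general idea of steering between a narrow and a wide resolve kernel by edge strength:

```c
#include <math.h>

/* Hypothetical edge-detect MSAA resolve sketch. Estimate edge
 * strength from neighbour luminance, then blend between a narrow
 * (box) and a wide (tent) resolve of the MSAA samples. */

static float luma(const float rgb[3])
{
    return 0.299f * rgb[0] + 0.587f * rgb[1] + 0.114f * rgb[2];
}

/* left/right/up/down are the already-resolved neighbour pixels. */
static float edge_strength(const float l[3], const float r[3],
                           const float u[3], const float d[3])
{
    float gx = luma(r) - luma(l);
    float gy = luma(d) - luma(u);
    return fminf(1.0f, sqrtf(gx * gx + gy * gy)); /* clamp to [0,1] */
}

/* narrow = box filter over the pixel's own samples,
 * wide   = tent filter pulling in neighbouring samples.
 * A strong edge pushes the result toward the wide kernel. */
static void resolve(float out[3], const float narrow[3],
                    const float wide[3], float edge)
{
    for (int i = 0; i < 3; ++i)
        out[i] = (1.0f - edge) * narrow[i] + edge * wide[i];
}
```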

Regards,
SB
 
If they can get it working as if it was a single non-SLI/Crossfire card that handles multi-monitors while both GPUs are engaged in rendering, then it'll still be a consideration for my next machine along with whatever Nvidia's next card is.

They'll have to, the same as everybody else, because the age of the single monolithic GPU chip is coming to an end, just as it is for CPUs. Whether it's multiple cores on the same die, or multiple dies on the same package, it's the direction things are going in, and intercommunication will become more important than ever.

AMD's ring bus and HyperTransport technology may well help them with this in the future, and might even be easier to implement on a single package/die than across two separate cards over the PCIe bus or connectors. Everything's getting so modular, I wouldn't be surprised to see a multicore (four? eight cores?) GPU-only solution on a PCIe card that looks pretty much the same as a Fusion CPU, which might (in the mainstream market) be mostly CPU cores plus one or two GPU cores.
 
This thread is exclusively for the discussion of the R7xx architecture, and as such it will have higher quality standards and should be more technical than SKU-specific threads.

Rumoured Data Points
- Evolutionary step of the R6xx architecture.
- Not the first Radeon family with DX10.1 support (that's RV670).
- First chip will be on 55nm according to AMD, same process as RV670?
...

It might be safe to add FP64 to the list?
 