The LAST R600 Rumours & Speculation Thread

I just hope R600 steps it up a notch. I've been holding off on a monitor upgrade because my current X1900XT isn't up to the task at higher resolutions. The current G80 offerings haven't hit the sweet spot either - the GTS is too cut-down and the GTX form factor is a no-go for me.
 
Yeah I understand that but it's not like we're trying to cure cancer. If DX9+XP is significantly slower than Vista+DX10 on the same task then we can obviously draw the conclusion that Microsoft's promises hold some merit.

That's the problem.. when will we see this same task?
I really doubt it will be an issue though.. either R600 or G80 will be trounced in the DX10 benchmarks and vice versa for DX9.
 
Presumably R600, like G80, will rough up DX9 class cards in DX9 on Vista quite handily. That's a given.

Dunno about form factor there, Trini. Some sources seem to indicate a 9" retail R600, so maybe that works for you. I'm not sure I believe that tho, really. I will when I see a pic that shows that. :smile: What res are you trying to optimize for? 1920x1200? I'm going to guess (and, wow, this is a guess) that I like the odds of R600 winning out comfortably most of the time at 1920x1200 at 8xmsaa and 16xaf just from the pure BW advantage. Hard to say just yet by how much tho. . .and I base that on pure speculation that kicking butt at 2560x1600 would not have been enough justification to do 512-bit and GDDR4.
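For a rough sense of that BW advantage, here's a quick back-of-the-envelope peak-bandwidth comparison. All the R600 numbers here are rumoured or assumed, not confirmed specs; the 8800 GTX figures are its shipping 384-bit GDDR3 at 1.8 Gbps.

```python
# Back-of-the-envelope peak memory bandwidth (GB/s).
# R600 data rates are rumoured/assumed, not confirmed specs.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth = bus width in bytes * per-pin data rate (Gbit/s per pin)."""
    return bus_width_bits / 8 * data_rate_gbps

g80_gtx = peak_bandwidth_gbs(384, 1.8)   # 8800 GTX: 384-bit GDDR3 @ 900 MHz (1.8 Gbps)
r600_lo = peak_bandwidth_gbs(512, 2.0)   # rumoured R600: 512-bit GDDR4 @ 1.0 GHz (2.0 Gbps)
r600_hi = peak_bandwidth_gbs(512, 2.2)   # rumoured R600: 512-bit GDDR4 @ 1.1 GHz (2.2 Gbps)

print(f"8800 GTX : {g80_gtx:.1f} GB/s")  # ~86.4 GB/s
print(f"R600 (lo): {r600_lo:.1f} GB/s")  # ~128.0 GB/s
print(f"R600 (hi): {r600_hi:.1f} GB/s")  # ~140.8 GB/s
```

Even at the low end of the assumed GDDR4 data rate that's roughly a 50% raw bandwidth edge over the GTX, which is the gap the 1920x1200 8xmsaa/16xaf argument leans on.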
 
Yep got it on the first try - 1920x1200 :) Good point on the bandwidth advantage. I'm going to be looking at that pretty closely. But like so many other people I'm not in a big rush since there isn't too much on the horizon and I still have a decent backlog of DX9 titles to check out that R580 can handle just fine. Just irks me a bit that I haven't gone widescreen as yet.
 
Yep got it on the first try - 1920x1200 :) Good point on the bandwidth advantage. I'm going to be looking at that pretty closely.

I'm going to be interested to see if NV ends up making the case that 16x CSAA is "as good as" AMD's 8xmsaa. They've probably had a better sense of what R600 is than we do for a longer period of time. CSAA is clearly intended to be a BW-saver. Why do you develop a BW-saver while you are also in the midst of increasing memory bus width by 50%? It seems to me a likely answer is that you are still concerned that you are going to have a bw disadvantage even with your 50% increase, and that it *will* matter unless you do something else to try to counter it. We gave CSAA props in our annual awards, as it deserves, but it probably didn't spring from David Kirk's forehead after a bad sushi dinner at Benihana, y'know? There had to be a trigger. So far as I can see, there is no technical reason to have not done CSAA with NV40 or G70. . . so why now?
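To put some rough numbers behind the "BW-saver" point, here are uncompressed worst-case framebuffer figures at 1920x1200 with 8x MSAA. Purely illustrative: real hardware compresses colour and Z heavily, so actual traffic is far lower, and the overdraw factor below is just an assumption.

```python
# Rough, uncompressed worst-case framebuffer traffic at 1920x1200 with 8x MSAA,
# just to show why BW-savers matter even on a fat bus.
# Real GPUs compress colour/Z heavily, so actual traffic is far lower.

pixels       = 1920 * 1200
samples      = 8        # 8x MSAA
bytes_colour = 4        # RGBA8 per sample
bytes_z      = 4        # 24-bit Z + 8-bit stencil per sample

# Naively writing every colour+Z sample once per frame:
write_bytes = pixels * samples * (bytes_colour + bytes_z)
print(f"{write_bytes / 2**30:.2f} GiB per frame")          # ~0.14 GiB

# At 60 fps with an assumed 3x overdraw (plus the Z reads that implies):
overdraw = 3
per_second = write_bytes * overdraw * 60
print(f"~{per_second / 2**30:.0f} GiB/s of raw traffic")   # ~25 GiB/s
```

That ignores texture traffic entirely and assumes zero compression, so it overstates the real cost, but it shows why even ~100 GB/s of raw bandwidth isn't "free" once high-res MSAA is switched on.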
 
That's the problem.. when will we see this same task?
I really doubt it will be an issue though.. either R600 or G80 will be trounced in the DX10 benchmarks and vice versa for DX9.

Guess we'll see it when somebody codes it... Looks like game devs are planning to do more with DX10 paths instead of just running DX9 stuff faster.
 
Why not now? :D

Because it is counter-intuitive to think, "Well, we're going to increase our bandwidth by 50% this generation. . .clearly what we need is some way to save bandwidth." Could it have happened? Could some line guy have said to his manager "Hey, I've got a nifty idea. . ." out of the blue one day? Sure, it happens.

Edit: And cut a brother a break -- I need to get my predictions out of the way sooner rather than later at this point. ;)
 
Because it is counter-intuitive to think, "Well, we're going to increase our bandwidth by 50% this generation. . .clearly what we need is some way to save bandwidth." Could it have happened? Could some line guy have said to his manager "Hey, I've got a nifty idea. . ." out of the blue one day? Sure, it happens.

Edit: And cut a brother a break -- I need to get my predictions out of the way sooner rather than later at this point. ;)

Well, CSAA needs support.
Could they be aiming to promote it with MS for inclusion in the DX10.1 spec, for added benefit at a later date?
Look at Purevideo vs Purevideo HD, TSAA, ATI's 3Dc, etc...
They could have included it in G80 with a G81 or G90 in mind, or am I not being reasonable? ;)
 
Because it is counter-intuitive to think, "Well, we're going to increase our bandwidth by 50% this generation. . .clearly what we need is some way to save bandwidth."
I'm sorry, that's not counter-intuitive:
  1. GPU designers are always looking for ways to improve bandwidth efficiency. Sometimes it's nothing more than Moore's Law that allows them to put in bandwidth-saving features, because they become relatively cost-effective.
  2. CSAA saves bandwidth and it has the most visual impact in parts of a scene where aliasing is worst (where a pixel contains one edge - when more edges share a pixel then normal MSAA works very well).
CSAA requires a significant change in the post-rasterisation AA-sample generation hardware (now has to work on a 32x32 sparse-sampled grid) and it's a significant increase in functionality for the ROPs. Apart from anything else, AA happens to utilise the sampling grid defined by the Multifunction Interpolator:

http://www.beyond3d.com/forum/showpost.php?p=789268&postcount=49


All in all it's not trivial and seems to me to be an intrinsic part of the G80 architecture, not something that could have been tacked onto G7x.

Jawed
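For anyone who wants the coverage-sample idea in concrete terms, here's a toy software model of a 16x-CSAA-style resolve: 4 stored colour/Z samples plus 16 coverage samples, with the stored colours weighted by how many coverage samples map to each of them. This is an illustration of the concept only, not NVIDIA's actual hardware algorithm or data layout.

```python
# Toy model of the CSAA idea (not NVIDIA's actual hardware algorithm):
# store full colour/Z for only a few samples, but track coverage at a higher
# sample count, and weight the stored colours by coverage at resolve time.

from collections import Counter

def csaa_resolve(stored_colours, coverage_indices):
    """stored_colours: the few (r, g, b) samples actually kept in memory.
    coverage_indices: one entry per coverage sample, naming which stored
    colour covers that position. Returns the resolved pixel colour."""
    counts = Counter(coverage_indices)
    total = len(coverage_indices)
    return tuple(
        sum(stored_colours[i][c] * n for i, n in counts.items()) / total
        for c in range(3)
    )

# Example: an edge where 5 of 16 coverage samples see the red foreground
# triangle and the other 11 see the blue background.
colours  = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
coverage = [0] * 5 + [1] * 11
print(csaa_resolve(colours, coverage))   # (0.3125, 0.0, 0.6875)
```

The point of the toy model is simply that the edge gets 16 gradation steps while only 4 full colour/Z samples ever hit memory, which is where the bandwidth and footprint saving comes from.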
 
Why are gpu designers always looking for ways to improve bandwidth efficiency? Presumably you meant more by that than "gpu designers are always looking for ways to make gpus better", something specific to the importance of bw to overall gpu performance. Maybe even something about the general relationship of fillrate improvements vs bandwidth improvements historically?
 
Why are gpu designers always looking for ways to improve bandwidth efficiency? Presumably you meant more by that than "gpu designers are always looking for ways to make gpus better", something specific to the importance of bw to overall gpu performance. Maybe even something about the general relationship of fillrate improvements vs bandwidth improvements historically?

Maybe it's because both bus and memory technologies develop too slowly.
I mean, from EDO to GDDR4 the evolution has been pretty slow when you compare it to the GPU and API tech of each generation.

And since the DRAM market is so volatile, these techniques can prove very handy in case of RAM shortages and time-to-market pressures, at least over short- to medium-term business horizons.
A few years ago, when they started R&D for DX9 and beyond, there were no guarantees that GDDR3 or GDDR4 would be cheap enough for consumer high-end and mainstream graphics cards, and here we are today, with the finished (or nearly finished, eh) R6xx and G8x derivatives using it.


It could be a case similar to that of the bean counters at AMD back in 2003-2004, when Intel hit the streets with the DDR2-supporting i915/i925, saying "we will wait for it, DDR2 has yet to prove itself worthy". Indeed, the market entered an unexpectedly slow two-year transition from DDR-333/400 to DDR2-533/667/800, and that period benefited them greatly in terms of sales and cheaper platform costs, thanks to DDR's favourable cost, availability and latency compared to the early DDR2 parts that Intel forced itself into using (hence the long life of i848 and i865).


So, in essence, the focus on BW-saving techniques might not be just about design goals; it may also aim for a certain degree of economic future-proofing.
 
Parhelia proved that bw improvements are not a panacea, certainly. But that's more the exception than the rule.
 


Personally I don't like CSAA all that much...


Just give me 12x supersampling AA and 32x HQ AF, ATI, and I will love you...
 
R200 - 8.8GB/s

http://www.beyond3d.com/misc/chipcomp/?view=boarddetails&id=52

R300 - 19.8GB/s

http://www.beyond3d.com/misc/chipcomp/?view=boarddetails&id=206

At the same time, enhanced bandwidth-saving (as well as shader-operation-saving) techniques were put in place:

http://www.beyond3d.com/reviews/ati/radeon9700pro/index.php?page=page4.inc#hyper

In R600, on top of the extra bandwidth we're assured of by GDDR4 and the wider bus, there's also improved hierarchical-Z rejection plus a new compression feature for floating-point render targets with AA.

Over the lifetime of R300-R580, the on-die memory dedicated to hierarchical Z has grown, in order to save bandwidth.

etc.

So, it's normal that while increasing bandwidth they are engineering new features to save bandwidth.

Jawed
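As a concrete illustration of the hierarchical-Z point above, here's a minimal sketch of the idea: keep a coarse, on-die "farthest depth per tile" so whole tiles of an incoming triangle can be rejected without ever reading the full-resolution Z buffer in memory. This is a generic sketch only, not ATI's actual implementation; the tile size and update rule are assumptions.

```python
# Minimal sketch of hierarchical-Z rejection (generic idea, not ATI's design).
# A coarse per-tile max depth lets whole tiles be rejected without touching
# the full Z buffer, which is where the bandwidth saving comes from.

TILE = 8  # 8x8 pixel tiles (tile size is an assumption)

class HierZ:
    def __init__(self, width, height):
        self.tiles_x = (width + TILE - 1) // TILE
        self.tiles_y = (height + TILE - 1) // TILE
        # Conservative farthest (max) depth per tile; 1.0 = far plane.
        self.tile_max_z = [[1.0] * self.tiles_x for _ in range(self.tiles_y)]

    def tile_occluded(self, tx, ty, tri_min_z):
        """If the nearest point of the incoming triangle within this tile is
        still behind the farthest depth already recorded, every per-pixel
        depth test would fail, so the whole tile is skipped -- no per-pixel
        Z reads needed."""
        return tri_min_z > self.tile_max_z[ty][tx]

    def full_cover_update(self, tx, ty, draw_max_z):
        """After a draw that covers the whole tile, the farthest surviving
        depth can't exceed the draw's own max depth (or the old bound)."""
        self.tile_max_z[ty][tx] = min(self.tile_max_z[ty][tx], draw_max_z)

hz = HierZ(1920, 1200)
hz.full_cover_update(0, 0, 0.3)       # nearby geometry already drawn in tile (0, 0)
print(hz.tile_occluded(0, 0, 0.7))    # True  -> tile rejected, Z reads saved
print(hz.tile_occluded(1, 0, 0.7))    # False -> must run the per-pixel tests
```

Growing the on-die storage for this structure, as the post above notes happened over R300-R580, simply lets the coarse test cover more of the screen at finer granularity, rejecting more work before it costs external bandwidth.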
 
I think RV630, for example, is 128-bit to keep it as cheap as possible to produce, even at 65nm, even at the cost of some performance it may or may not need. If the <80mm2 rumor is true, and it competes with the 8600 series, which is also supposedly 128-bit, no doubt that chip is a huge winner for ATi, as mainstream is where the $ are. The 8600-series chip could possibly be twice as big, putting them in the situation Nvidia was in last gen: the ability to chop prices if need be, while also being able to create 'x2'-style cards if need be to compete.

According to all rumors, speculation and indications, the 8300 (low-end nVidia) parts will be 128-bit. The 8600 Ultra (mainstream) parts will be 256-bit. What the 8600GT will have is anyone's guess. According to all rumors, speculation and indications.

http://www.guru3d.com/newsitem.php?id=4834 <----- One of the many different places reporting this rumor.
 
According to all rumors, speculation and indications, the 8300 (low-end nVidia) parts will be 128-bit. The 8600 (mainstream) parts will be 256-bit. The "8600 will be 128-bit" claim was due to some confusion between 8600 and 8300. According to all rumors, speculation and indications.

Where can I read all those rumors, speculation and indications? ;)
I am trying to settle between an X1950 Pro 512MB and an 8600, so these things have to be known soon. :D
My NV45 needs rest.
 
I'm going to be interested to see if NV ends up making the case that 16x CSAA is "as good as" AMD's 8xmsaa. They've probably had a better sense of what R600 is than we do for a longer period of time. CSAA is clearly intended to be a BW-saver. Why do you develop a BW-saver while you are also in the midst of increasing memory bus width by 50%? It seems to me a likely answer is that you are still concerned that you are going to have a bw disadvantage even with your 50% increase, and that it *will* matter unless you do something else to try to counter it. We gave CSAA props in our annual awards, as it deserves, but it probably didn't spring from David Kirk's forehead after a bad sushi dinner at Benihana, y'know? There had to be a trigger. So far as I can see, there is no technical reason to have not done CSAA with NV40 or G70. . . so why now?
A couple thoughts about CSAA.
  • It reduces memory footprint in addition to BW, though of course it won't look as good as true 16x MSAA (rough numbers in the sketch after this list).
  • To work well it needs 4x AA to be fast.
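Rough footprint numbers at 1920x1200, taking 16x CSAA as 4 stored colour/Z samples plus 16 coverage samples. The per-coverage-sample cost below is a guess, since the exact storage format isn't public, but the point stands that it's tiny next to a full colour+Z sample.

```python
# Rough framebuffer footprint at 1920x1200 (RGBA8 colour + 32-bit Z/stencil per
# stored sample). The per-coverage-sample cost for CSAA is an ASSUMPTION --
# the exact format isn't public -- but it's small either way.

pixels           = 1920 * 1200
bytes_per_sample = 4 + 4          # colour + Z/stencil per fully stored sample
coverage_bytes   = 0.5            # assumed ~4 bits of coverage info per coverage sample

def footprint_mib(stored_samples, coverage_samples=0):
    total = pixels * (stored_samples * bytes_per_sample + coverage_samples * coverage_bytes)
    return total / 2**20

print(f"16x MSAA : {footprint_mib(16):.0f} MiB")       # ~281 MiB
print(f" 8x MSAA : {footprint_mib(8):.0f} MiB")        # ~141 MiB
print(f"16x CSAA : {footprint_mib(4, 16):.0f} MiB")    # ~88 MiB (4 stored + 16 coverage)
```

Versus true 16x MSAA that's roughly a 3x saving in footprint, and in the bandwidth needed to touch it, which is presumably the whole point.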
 
Where can I read all those rumors, speculation and indications? ;)
I am trying to settle between an X1950 Pro 512MB and an 8600, so these things have to be known soon. :D
My NV45 needs rest.

Edited my post to include links, apparently while you were posting your comments :). I originally had my comments include the 8600GT, which I believe will also have a 256-bit bus, but I can't prove that anyone else is rumor-mongering that, so I won't say it.

For the record, if the 8600 Ultra has the specs it is projected to have (512MB RAM on a 256-bit bus, all for ~$200), that will be my next card. I have unexpectedly moved to a SFF platform, so a GTX-ish size card will be unsuitable for me. That also rules out any handle-possessing R600 cards, but I wouldn't have purchased something that big for my tower either (unless it was really cheap and fast).

I am also interested to see what R600 will bring in the ~$200 - $275 price range. That's what I am willing to spend for a DX10 part. I can't wait forever, though - whoever gets a good DX10 part out in that price range first is getting my business, unless there is some compelling reason not to do so, like it's 15 inches long or takes up three slots.
 