The LAST R600 Rumours & Speculation Thread

I think that's more a factor of the DRAM market than anything ATI consciously did, though. Also, wasn't the limited availability of the X800 XT PE blamed at least in part on Samsung's limited ability to supply 1.6ns GDDR3?

Does it matter? You made a blanket statement that bandwidth hasn't increased significantly since R300. Obviously that's not the case. :)
 
Incidentally there's an interesting little post scriptum on one of those Inq links: That might suggest that the 12" or 13" cards that have been rumoured were actually only pre-production prototypes, and that the actual production GDDR4 cards are all sensible sizes...?

Well the card itself, without the cooling+handle, doesn't seem 12" or 13" long in that blurry pic from VR-Zone
 
Well the card itself, without the cooling+handle, doesn't seem 12" or 13" long in that blurry pic from VR-Zone

Tell me about it; to me it just looks like an X1950 XTX with an oversized lead brick attached to it...
 
Incidentally there's an interesting little post scriptum on one of those Inq links: That might suggest that the 12" or 13" cards that have been rumoured were actually only pre-production prototypes, and that the actual production GDDR4 cards are all sensible sizes...?


12" cards (non-oem)=FUD.

Common sense says ATI is not going to release a card that doesn't fit in most cases.
 
Are you talking about some other company named Nvidia? Cause you couldn't be talking about the one selling GPUs :)

Hmmmm, how much faster does R600 have to be than G80 for us not to spend the next couple months talking about how it's not worth waiting for G81?
 
Considering the comments made on the previous page about whether or not the 512-bit memory bus and all of its resulting bandwidth is necessary, did anyone take a look at the low-end and midrange parts?

The lowest end has a 64-bit bus + DDR2; perhaps it turned out to be adequately fast (for the low end) without more bandwidth? 128-bit with GDDR4 might be okay; I'll leave those conclusions to those more familiar with GDDR4 (which doesn't include me :D ). The low end is the low end, but how well will that 128-bit/GDDR4 part perform (assuming any of this is true)?

I'm just wondering why the midrange and low-end cards would end up with so much less bandwidth while the high-end part is on a 512-bit bus. My conclusion was that maybe it is just as has been speculated: maybe all of it wasn't necessary.
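For anyone wanting to sanity-check the gap between those buses, theoretical peak bandwidth is just bus width times effective per-pin data rate. A minimal back-of-envelope sketch in Python; the per-pin data rates here are purely illustrative guesses, not confirmed specs for any of these parts:

Code:
# Theoretical peak bandwidth: bus width (bits) * effective data rate (Gbps/pin) / 8
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

# Illustrative data rates only (DDR2 ~0.8 Gbps/pin, GDDR4 ~2.0 Gbps/pin assumed)
for label, bits, rate in [("64-bit DDR2", 64, 0.8),
                          ("128-bit GDDR4", 128, 2.0),
                          ("256-bit GDDR4", 256, 2.0),
                          ("512-bit GDDR4", 512, 2.0)]:
    print(f"{label}: {bandwidth_gbs(bits, rate):.1f} GB/s")
# -> 6.4, 32.0, 64.0 and 128.0 GB/s respectively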

We all know that the enthusiast market is just to draw attention to the products, but the mainstream parts do need to perform at least a little bit, don't they?
 
ATI have commented in the past about the spread of cards released and that as time goes on there would be more steps from low end to high end. So going for 512-bit at the high end doesn't mean that the low end won't be 64-bit, but rather that now you can have 64-, 128-, 256- and 512-bit cards. This means it should be easier to have a price point for all pockets IMHO.
 
ATI have commented in the past about the spread of cards released and that as time goes on there would be more steps from low end to high end. So going for 512-bit at the high end doesn't mean that the low end won't be 64-bit, but rather that now you can have 64-, 128-, 256- and 512-bit cards. This means it should be easier to have a price point for all pockets IMHO.

That is a way of looking at it that I hadn't considered. That sounds like it would be a pretty cheap card. Not quite "Oh, I have some change - I could get a pack of gum, or ATI's latest low end", but close.
 
Sorry, are you saying that giving the high-end product more bandwidth than it actually needs somehow gives the midrange product a competitive advantage? :unsure:

No, that wasn't my point at all; it's one of the possibilities for taking advantage of the wider bus-width range from top to bottom, yes, yet not the primary reason to drive such a design decision per se. From the looks of it NV will also jump on the 512-bit bandwagon sooner or later, whereby they took an intermediate first step with 384 bits. It rather looks like ATI didn't bother with any intermediate solutions for whatever reasons and went straight for the whole enchilada.

Now pick your poison: either G80 is bandwidth-constrained today, or R600 has a tad more bandwidth than it actually needs. Both will likely prove to be exaggerations in practice; it may be true that a GPU can never have enough bandwidth, but for that to show up in real-world performance the accelerator would have to never hit any other bottleneck.
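To put rough numbers on that trade-off, here's the same back-of-envelope formula again: the 8800 GTX line matches its shipping 384-bit / ~1.8 Gbps GDDR3 spec, while the R600 lines are only illustrative assumptions layered on the rumoured 512-bit bus:

Code:
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print("G80 (8800 GTX), 384-bit GDDR3:", bandwidth_gbs(384, 1.8), "GB/s")  # 86.4
print("R600 rumour, 512-bit GDDR3:   ", bandwidth_gbs(512, 1.8), "GB/s")  # 115.2
print("R600 rumour, 512-bit GDDR4:   ", bandwidth_gbs(512, 2.2), "GB/s")  # 140.8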
 
I have to wonder if the R600 will even be out before nVidia's next high-end G8x part.

That would be the fastest refresh since NV30->NV35. Looking at the timeline of the original G80 introduction, a June/July release seems most plausible (given a planned introduction cycle of one year for a high-end part).
 
That would be the fastest refresh since NV30->NV35.
Depends. How do we count the 7800 GTX 512MiB? Anyway, one of the many reasons why we are no longer on six-month cycles for the high end is that there haven't been high-end optical shrinks in a long while, as far as I can see.

G80 has a pretty big advantage there, because it's digital-only: all the analog stuff is on NVIO. Consider what happens with a shrink if some parts scale (digital) and others pretty much don't (analog): you'll pretty much be forced to rework your floorplan. So G80 should be easier to optically shrink than most GPUs preceding it. That's not the justification for NVIO, I'm sure, but it is a positive side effect.
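As a purely hypothetical illustration of why the analog split helps on a shrink (all area figures and scaling factors below are made up, not real G80 numbers):

Code:
# Made-up numbers: digital logic area scales well on an optical shrink,
# while analog blocks (PHYs, DACs) barely shrink and usually need re-layout.
DIGITAL_SCALE = 0.65
ANALOG_SCALE = 0.95

def shrunk_area(digital_mm2, analog_mm2):
    return digital_mm2 * DIGITAL_SCALE + analog_mm2 * ANALOG_SCALE

# A mixed die keeps non-scaling analog blocks that force a floorplan rework;
# a digital-only die (with the analog moved to a companion chip) scales uniformly.
print("Mixed die:       ", shrunk_area(440, 40), "mm^2")  # 324.0
print("Digital-only die:", shrunk_area(480, 0), "mm^2")   # 312.0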

I would indeed be surprised if we suddenly had a very fast refresh cycle again, but optical shrinks are the exception here. In fact, that implies we'd likely see another refresh (a GX2 based on a lower-end card?) within the next six months, if not less, since this one can hardly be mobilizing their entire engineering team! ;)
 
I have to wonder if the R600 will even be out before nVidia's next high-end G8x part.

Nah, I doubt Nvidia will refresh their first batch of DX10 cards before they've even had a chance to run DX10.

BTW, now that the driver is out, are we going to see a DX10 performance exposé on G80 at B3D, or will we have to wait till the R600 review?
 
Nah, I doubt Nvidia will refresh their first batch of DX10 cards before they've even had a chance to run DX10.
I somehow doubt they care much about that. Furthermore, I doubt they're really eyeing replacing the current line-up; rather complement it with a higher-end part, and cost-reduce the others through the shrink.
BTW, now that the driver is out, are we going to see a DX10 performance exposé on G80 at B3D, or will we have to wait till the R600 review?
I do not believe the drivers are mature enough for us to judge the shader core's performance properly in DX10 yet - so, I think we'll likely want to do that on the R600 launch, possibly comparing the two chips' DX10 performance in the same piece.

Now, DX10 isn't only going to stress the shader core though. The TMUs and ROPs are also probably worth some investigation there, because of all the new resource formats. So I'm hoping we can have a piece on that published well before the R600's launch, along with perhaps a much more detailed analysis of the performance of G80's ROPs and compression capabilities. We'll see - no promises on the timeframes involved, but it's definitely on my TODO list at least! :)
 
I somehow doubt they care much about that. Furthermore, I doubt they're really eyeing replacing the current line-up; rather complement it with a higher-end part, and cost-reduce the others through the shrink.
I do not believe the drivers are mature enough for us to judge the shader core's performance properly in DX10 yet - so, I think we'll likely want to do that on the R600 launch, possibly comparing the two chips' DX10 performance in the same piece.

Now, DX10 isn't only going to stress the shader core though. The TMUs and ROPs are also probably worth some investigation there, because of all the new resource formats. So I'm hoping we can have a piece on that published well before the R600's launch, along with perhaps a much more detailed analysis of the performance of G80's ROPs and compression capabilities. We'll see - no promises on the timeframes involved, but it's definitely on my TODO list at least! :)
:oops: :oops: That would be something :smile:
 