
 
Old 16-Oct-2006, 13:32   #1
Arun
Unknown.
 
Join Date: Aug 2002
Location: UK
Posts: 4,934
Default The ATI R600 Rumours & Speculation Centrum

This thread is the new "official" place to discuss ATI's upcoming R600 chip, plus all related products and technologies, at least for the time being. The following points should be kept in mind:
  • The thread will be moderated to improve the signal-to-noise ratio slightly
  • All off-topic discussions will be moved to other threads, or deleted, at our discretion
  • This first post will be edited over time to link to related sources and leaks
  • Further "summaries" may be based on consensus or apparent credibility, with minority reports generally noted
No matter how many stupid rules I come up with, the goal still is to have a good and active discussion going, so get started guys!

Summary
D3D10, 500M+ transistors, release sometime between January and the end of Q1 2007, certainly 80nm. Based on GDDR4, with a large number of reports claiming a 512-bit external bus, although these are only mildly reliable. Minority reports exist for a 256-bit bus (sometimes from more reliable sources), or a dual-chip solution (256-bit bus per chip, 2 chips per board).

The vast majority believe in the existence of 64 unified "real pipelines" (4x16, a la Xenos' 3x16, although perhaps with different units internally). There are minority reports for 80 and 96. The exact organization of the pipelines is unknown. Xenos is Vec4+Scalar MADD per "pipeline". There are minority reports of a similar organization, as well as of a more R580-like one, and finally also some reports of fully scalar units (4xScalar MADD + ???). None of them seems particularly more credible than the others at this point.
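For a sense of scale, the rumored pipeline counts can be translated into theoretical shader throughput. A minimal sketch, assuming a Xenos-style pipe counted as 9 flops per clock and a purely hypothetical 650 MHz core clock (both figures are illustrative assumptions, not reported specs):

```python
FLOPS_PER_PIPE_PER_CLOCK = 9   # assumed Xenos-style counting (Vec4 MADD + scalar op)
CLOCK_MHZ = 650                # hypothetical clock, borrowed from R580 for scale only

for pipes in (64, 80, 96):     # the majority and minority pipeline-count reports
    gflops = pipes * FLOPS_PER_PIPE_PER_CLOCK * CLOCK_MHZ / 1000
    print(f"{pipes} pipes -> {gflops:.1f} GFLOPS")
```

Under these assumptions, only the 96-pipe configuration clears half a teraflop at R580-class clocks, which is one reason the higher counts keep resurfacing.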

As per Direct3D10's requirements, native FP16 filtering should be supported, which should make Far Cry's HDR mode function on R6xx even with the original NVIDIA patch, at least in theory. Beyond that, little is known about the AA and AF algorithms, with most believing they will be largely similar to R580's. Minority reports of 8x MSAA support exist, however.

Reliable performance predictions are nonexistent at this point, as only sensationalistic tabloid sites have claimed any relative or absolute figures. It is likely that the chip will be much faster at vertex shading than R580, given its unified nature. It is unknown, however, how high the triangle setup rate will be, as well as some related figures such as attribute fetch. These might or might not become new bottlenecks in modern industry benchmarks. It is also unknown whether the number of TMUs and ROPs has increased from R580's 16 each.

The power and heat figures of the R600 were initially reported as roughly "150W+", but recent minority reports have increased that number to 200-250W. Other, older minority reports mentioned 100-150W instead. Nothing reliable is known regarding heat dissipation and the technologies to minimize the problem, should these figures prove accurate.

Links
The old R600 thread: http://www.beyond3d.com/forum/showthread.php?t=31049
Arun is offline  
Old 16-Oct-2006, 13:58   #2
incurable
Member
 
Join Date: Apr 2002
Location: Germany
Posts: 547
Default

I'm wondering about the cooling solution, should R600 really go up into the 200W+ range, given the limited space available and the fact that (in a much less restrictive environment) even cooling a 130W Prescott was considered far from trivial.

Any word on a possible watercooling option for these parts (beyond the Sapphire Toxic special SKUs), as some have suggested will be available for G80?

Also, beyond R600 itself, I was wondering if there was anything known about its smaller brethren.
incurable is offline  
Old 16-Oct-2006, 15:13   #3
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Default

At one time it was speculated that unification would be such a transistor-saver for the low and low-mid range that we might see a whole family released, if not simultaneously, then something close to it.

But I don't think we've heard anything about the rest of the family. The roadmap that mentions R600 in Q1 does not mention any others. . . but is labeled "High-end CrossFire Roadmap", so you wouldn't expect it to. . .
__________________
"We'll thrash them --absolutely thrash them."--Richard Huddy on Larrabee
"Our multi-decade old 3D graphics rendering architecture that's based on a rasterization approach is no longer scalable and suitable for the demands of the future." --Pat Gelsinger, Intel
"Christ, this is Beyond3D; just get rid of any f**ker talking about patterned chihuahuas! Can the dog write GLSL? No. Then it can f**k off." --Da Boss
Geo is offline  
Old 16-Oct-2006, 15:20   #4
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,955
Default

Well, as I theorised in the other thread, shouldn't we be expecting 65nm value/mainstream/performance R6xx parts during the same quarter as R600?

Additionally, if they're all (or at least some) scheduled for near-simultaneous release, would that explain the relative tardiness? Is ATI simply taking advantage of the "lateness" of Vista?

Jawed
Jawed is offline  
Old 16-Oct-2006, 15:22   #5
epicstruggle
Passenger on Serenity
 
Join Date: Jul 2002
Location: Object in Space
Posts: 1,902
Default

Quote:
Originally Posted by Uttar
D3D10
Stupid question time:
Any word whether ATI plans on releasing features beyond D3D10?
__________________
"everyone is entitled to his own opinion, but not his own facts"
epicstruggle is offline  
Old 16-Oct-2006, 16:47   #6
trumphsiao
Member
 
Join Date: Jan 2006
Posts: 285
Default

Sorry, I have to state a few points I suspect:

1. The raw performance of R600 or G80 would not be more than 1.5 times that of G71.

2. Why is it harder to polish WinXP drivers for DX10 hardware?
trumphsiao is offline  
Old 16-Oct-2006, 17:34   #7
Arun
Unknown.
 
Join Date: Aug 2002
Location: UK
Posts: 4,934
Default

Quote:
Originally Posted by trumphsiao
1. The raw performance of R600 or G80 would not be more than 1.5 times that of G71.
So now you're telling us the leaked G80 3DMark05/06 scores were shit? Decide for yourself!
Quote:
2. Why is it harder to polish WinXP drivers for DX10 hardware?
It isn't. Drivers for truly new architectures always take more time to get right, that's all. Do you think the R300 and NV40 drivers were perfect overnight? Hint: they weren't. And there hasn't been such a massive architecture transition since.


Uttar
Arun is offline  
Old 16-Oct-2006, 21:24   #8
psurge
Member
 
Join Date: Feb 2002
Location: LA, California
Posts: 854
Default

Uttar - maybe the scores were for g80 SLIed?
psurge is offline  
Old 16-Oct-2006, 22:20   #9
^M^
Junior Member
 
Join Date: Jul 2006
Posts: 47
Default

Quote:
Originally Posted by psurge
Uttar - maybe the scores were for g80 SLIed?
Did the news report a power surge in Taiwan?
^M^ is offline  
Old 16-Oct-2006, 23:46   #10
pjbliverpool
B3D Scallywag
 
Join Date: May 2005
Location: Guess...
Posts: 5,933
Default

I still say 64 Xenos-style shaders isn't enough, as it would have to run at insane clock speeds just to match R580's raw shader performance.

Plus, didn't someone at ATI comment that the next gen would be over half a teraflop? The chip would have to run at 875MHz to achieve half a TFLOP.

I'm placing my money on either 96 Xenos-style shaders at 600+ MHz, or 64 significantly beefed-up shaders, again at 600+ MHz.

In fact, R600 may well just be double Xenos: 96 ALUs, 32 TMUs, 32 vertex texture units? Then either 24 or 32 ROPs and, hopefully, that really juicy 512-bit memory interface. All running at 600+ MHz, it should make quite a nice next-gen chip.
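The clock arithmetic above can be checked directly. A sketch assuming the thread's counting of 9 flops per pipe per clock (the convention implied by the ~875MHz figure, not a confirmed spec):

```python
def clock_for_gflops(target_gflops, pipes, flops_per_pipe_per_clock=9):
    """MHz a given pipe count would need to hit a FLOPS target."""
    return target_gflops * 1000 / (pipes * flops_per_pipe_per_clock)

print(clock_for_gflops(500, 64))  # ~868 MHz, matching the "875MHz" estimate above
print(clock_for_gflops(500, 96))  # ~579 MHz, consistent with "96 shaders at 600+ MHz"
```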
pjbliverpool is offline  
Old 17-Oct-2006, 00:04   #11
Skrying
S K R Y I N G
 
Join Date: Jul 2005
Posts: 4,815
Default

Does the R600 really need a 512-bit external bus? I would think it'd have to be mighty powerful to need all of that bandwidth.
Skrying is offline  
Old 17-Oct-2006, 00:09   #12
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Send a message via MSN to Geo
Default

Hence the circular nature of the discussion. Pick the rumor you believe, work it out to its natural conclusion, and you get to a different place each time.

64 units would not seem to require 512-bit GDDR4. So one or the other is likely incorrect, unless someone can point at a different way to square the circle. HDR? The R580+GDDR4 results would not seem to support that idea. Really eye-popping core speed, 800MHz or higher? Well, NV seems to have done something with the G80 shaders to get to 1350MHz, so maybe that possibility shouldn't be entirely dismissed, tho the conventional wisdom doesn't seem to favor it.

Oh yeah, toss some fairly frightening heat/power rumors in there too when trying to adjudicate the rumor mill. . . but those could go either way: >64 shaders or an 800MHz+ clock. Would the power/heat rumors be consistent with 64 shaders at 650MHz? I tend to think not.
Geo is offline  
Old 17-Oct-2006, 00:12   #13
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,955
Default

I really can't understand why 64 pipes has any credibility.

If Orton winked when he said 96 pipes recently, why are people still beating round the bush?

Jawed
Jawed is offline  
Old 17-Oct-2006, 00:16   #14
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Default

Oh, hang on. Orton winked and said 96? When/where was this?

The 64 comes from xbit claiming that ATI tipped them that way.
Geo is offline  
Old 17-Oct-2006, 00:19   #15
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,955
Send a message via Skype™ to Jawed
Default

http://www.techreport.com/etc/2006q4...g/index.x?pg=1

Jawed
Jawed is offline  
Old 17-Oct-2006, 00:22   #16
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Default

Son of a gun.

Quote:
Orton pegged the floating-point power of today's top Radeon GPUs with 48 pixel shader processors at about 375 gigaflops, with 64 GB/s of memory bandwidth. The next generation, he said, could potentially have 96 shader processors and will exceed half a teraflop of computing power.
Missed that. Right. Time perhaps to update the conventional wisdom watch then, even with that "potentially" hedge.
Geo is offline  
Old 17-Oct-2006, 00:29   #17
Jawed
Regular
 
Join Date: Oct 2004
Location: London
Posts: 9,955
Send a message via Skype™ to Jawed
Default

It's why 512-bit has credibility for me, even though I'm dubious about the die being big enough for such a wide bus, particularly with 65nm ~1 year away.

Jawed
Jawed is offline  
Old 17-Oct-2006, 00:31   #18
PeterAce
Member
 
Join Date: Sep 2003
Location: UK, Bedfordshire
Posts: 489
Default

So R500/C1 is 500 * (48 * 9) = 216000 MFLOPS.

So by that logic R600 might be:

600 * (96 * 9) = 518400 MFLOPS

So 600 MHz and six 16-way shader arrays. Very nice if true.
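The arithmetic above checks out; here it is as a sketch with the units made explicit (MHz times flops-per-clock yields MFLOPS):

```python
r500 = 500 * (48 * 9)  # 216000 MFLOPS = 216 GFLOPS at 500 MHz
r600 = 600 * (96 * 9)  # 518400 MFLOPS, clearing Orton's "half a teraflop"
print(r500, r600)
```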
__________________
PeterAce "Lost in quantisation"

Last edited by PeterAce; 17-Oct-2006 at 00:33.
PeterAce is offline  
Old 17-Oct-2006, 00:33   #19
psurge
Member
 
Join Date: Feb 2002
Location: LA, California
Posts: 854
Default

They can always shrink back to a smaller bus size and over-compensate with insane memory clocks, no?
psurge is offline  
Old 17-Oct-2006, 00:38   #20
Skrying
S K R Y I N G
 
Join Date: Jul 2005
Posts: 4,815
Default

Quote:
Originally Posted by psurge
They can always shrink back to a smaller bus size and over-compensate with insane memory clocks, no?
I wonder how that'd go over. Use a massive bus until you get insane memory clocks. Interesting, but I personally don't think it'd make sense. The marketing would be terrible, the layout for 512-bit would be insane... wait, that does sorta sound like ATi.
Skrying is offline  
Old 17-Oct-2006, 00:41   #21
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Send a message via MSN to Geo
Default

Maybe, but your PR people will hate you. They hate getting everyone all ZOMG about something and then having to explain later why the next part is still ZOMG when they take it away.

Not impossible, but generally not a happy prospect. And GDDR4 is the big bump, so I dunno where you'd suddenly get that extra bandwidth post-GDDR4. . .
Geo is offline  
Old 17-Oct-2006, 00:44   #22
Ailuros
Epsilon plus three
 
Join Date: Feb 2002
Location: Chania
Posts: 8,693
Default

Quote:
Originally Posted by psurge
They can always shrink back to a smaller bus size and over-compensate with insane memory clocks, no?
Sounds like a huge waste of R&D resources, if I didn't misunderstand you. I'm quite confident that IHVs know the limits of each future architecture right from the start.
__________________
People are more violently opposed to fur than leather; because it's easier to harass rich ladies than motorcycle gangs.
Ailuros is offline  
Old 17-Oct-2006, 00:48   #23
Geo
Mostly Harmless
 
Join Date: Apr 2002
Location: Uffda-land
Posts: 9,156
Default

Quote:
Originally Posted by Jawed
It's why 512-bit has credibility for me, even though I'm dubious about die size being big enough for such a big bus, particularly with 65nm ~1 year away.
The harder I look at what TR reported Orton saying, the more I notice that he gave current bandwidth, current flops, future flops, and. . . oh, wait, no future bandwidth. But if you extrapolate from what he did give, you'd be led to believe a number in the 120GB/s range. It's arranged almost in classic "what's the missing number" fashion.

Ha.
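That ~120GB/s figure is easy to sanity-check: peak bandwidth is just bus width times per-pin data rate. A sketch where the per-pin rates are my own illustrative assumptions, not numbers from the article:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # bytes per second = (bus width in bits / 8) * per-pin transfer rate
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(512, 1.9))  # 121.6 GB/s: a 512-bit bus at modest GDDR4 speeds
print(bandwidth_gb_s(256, 2.2))  # 70.4 GB/s: a 256-bit bus needs far faster memory
```

A 512-bit bus at even conservative GDDR4 rates lands right in the ~120GB/s range the missing number suggests.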
Geo is offline  
Old 17-Oct-2006, 00:49   #24
psurge
Member
 
Join Date: Feb 2002
Location: LA, California
Posts: 854
Default

Hmm, I was thinking something like XDR2. There's also some talk of GDDR5 already, so who knows (maybe that'll reduce the pin requirements versus GDDR4?). I guess it's more likely for the ultra-high-end chips to just keep pushing the die-size envelope...
psurge is offline  
Old 17-Oct-2006, 01:03   #25
Bouncing Zabaglione Bros.
Regular
 
Join Date: Jun 2003
Posts: 6,359
Default

Quote:
Originally Posted by geo
Oh, hang on. Orton winked and said 96? When/where was this?

The 64 comes from xbit claiming that ATI tipped them that way.
If you were ATI and wanted to blindside an earlier-launching G80, wouldn't you leak a low number that would have Nvidia thinking they'd got you cold, only to come back with R300-style insanity....?
Bouncing Zabaglione Bros. is offline  

 
