jvd said:Can't we get an adaptive solution?
Like 12x FSAA that would scale all the way down to 2x depending on a user-set frame rate (if a user always wants to be above 60 or 80 fps they can select that) and the number of samples each frame needs?
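The adaptive idea above can be sketched as a simple feedback controller: measure the last frame time, and step the sample count down when the user-set frame-rate floor is missed, up when there is headroom. This is a minimal illustrative sketch, not any real driver API; the class name, level table, and headroom threshold are all assumptions.

```python
# Hypothetical sketch of the adaptive-AA idea: step the FSAA sample count
# up or down to hold a user-set minimum frame rate. Illustrative only.

class AdaptiveAA:
    LEVELS = [2, 4, 6, 8, 12]  # assumed supported sample counts, low to high

    def __init__(self, target_fps=60):
        self.target_frame_ms = 1000.0 / target_fps
        self.index = len(self.LEVELS) - 1  # start at maximum quality (12x)

    @property
    def samples(self):
        return self.LEVELS[self.index]

    def update(self, last_frame_ms):
        # Too slow: drop one AA level. Comfortably fast (arbitrary 30%
        # headroom threshold): try one level higher again.
        if last_frame_ms > self.target_frame_ms and self.index > 0:
            self.index -= 1
        elif (last_frame_ms < 0.7 * self.target_frame_ms
              and self.index < len(self.LEVELS) - 1):
            self.index += 1

aa = AdaptiveAA(target_fps=60)
aa.update(25.0)   # a 25 ms frame (40 fps) misses the 16.7 ms target
print(aa.samples)  # controller has stepped down from 12x to 8x
```

A real implementation would need hysteresis or averaging over several frames so the AA level doesn't oscillate every time the frame time crosses the threshold.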
So it looks good. I'm thinking more pipelines rather than just a core boost, though.
The Baron said:Rialto bridge for AGP systems for a part to debut Q2 05? I REALLY doubt it. REALLY REALLY doubt it. Maybe for mid-end and below, okay, but high end? No. Core clock seems high, especially if pipelines are increased. Even if it is 600-700, though, it still seems high. FP32 and SM3.0 will probably bring a hefty increase in transistor count that .09 won't be able to overcome completely, so unless we're in for some dual-slot cooling or something like that, power consumption and cooling would probably prevent clock speeds from going over 625 or so.
MuFu said:
0.09u (low-k) @TSMC
Taped out about two months ago so first silicon probably back early this month
SM3.0
FP32
16P/8V
Shader core still based on R3x0/R4x0
Native PCIe and Rialto-bridged AGP versions
Max core clock in the region of 600-700MHz
250-300 million transistors
256-bit, GDDR3/4-compatible memory interface
256-512MB memory at launch (GDDR3@1.2-1.4GHz?)
Possibly higher MSAA modes (8x?)
Multiple GPU capability
Bundled with Gabe Newell - "I like pie whilst I frag" - t-shirt
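The clock-speed skepticism around a spec list like this usually comes down to dynamic power, which scales roughly with switched capacitance (transistor count is a crude proxy), voltage squared, and frequency. A back-of-envelope sketch follows; the baseline figures and the process-shrink factor are illustrative assumptions, not sourced numbers.

```python
# Rough dynamic power scaling, P ~ C * V^2 * f, using transistor count as a
# crude proxy for switched capacitance C and holding voltage constant.
# All numbers below are illustrative assumptions, not sourced specs.

baseline_transistors = 160e6   # assumed previous-generation-class part
baseline_clock_mhz   = 520.0
baseline_power_w     = 70.0    # assumed board-level ballpark

new_transistors = 300e6        # top of the rumoured 250-300M range
new_clock_mhz   = 700.0
process_scale   = 0.7          # assumed power benefit of the low-k 90nm shrink

scale = (new_transistors / baseline_transistors) \
      * (new_clock_mhz / baseline_clock_mhz) \
      * process_scale
estimated_power_w = baseline_power_w * scale
print(f"rough scale factor: {scale:.2f}x -> ~{estimated_power_w:.0f} W")
```

Even granting the process shrink a generous benefit, the naive estimate lands far above the baseline, which is the thread's point: cooling and power, not the process alone, set the practical clock ceiling.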
care to share the nature of this change...? GDDR3.5?
DaveBaumann said:50/50 there may be a slight alteration to GDDR3 before GDDR4 comes out.
Wasn't the whole point of GDDR3 to bring DDR2 (high speeds) and DDR (low latencies) together?
The Baron said:I hope we're not looking at a voltage bump... also, this might just be me getting confused and all, but doesn't GDDR3 have slightly looser timings than DDR?
Was it? I'm not sure. I thought the main reason was to lower voltages to (IIRC) 1.6v instead of 2.5v because DDR2 didn't scale as well as was expected...
Kaotik said:Wasn't the whole point of GDDR3 to bring DDR2 (high speeds) and DDR (low latencies) together?
Mortimer said:I just had this crazy(?) idea last night. Could they make the chip have 16 VS and 16 PS pipes, so that every VS is connected to a single PS pipe?
It would be a step towards a unified-shader-style design, in a way, I think. Is there any way this kind of design would work? (It would be massively unbalanced towards polygon power, but that's beside the point.)
Seems to me that they can't have (many) more PS pipes going from 24-bit -> 32-bit and 2.0 -> 3.0 shaders, but they could allocate some more transistors to the VS side of the chip.
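A quick way to see the "massively unbalanced" point is to compare per-frame vertex and pixel workloads. The scene figures below are made-up, era-typical assumptions, just to get the order of magnitude:

```python
# Rough vertex-vs-pixel workload comparison for a hypothetical 16 VS / 16 PS
# part with 1:1 pairing. Scene figures are made-up, era-typical assumptions.

vertices_per_frame = 1_000_000          # assumed fairly heavy scene
resolution_pixels  = 1280 * 1024
overdraw           = 3                  # assumed average shaded depth complexity
pixels_per_frame   = resolution_pixels * overdraw

ratio = pixels_per_frame / vertices_per_frame
print(f"~{ratio:.1f} pixels shaded per vertex processed")
```

So even in a vertex-heavy scene there are several pixels shaded per vertex, and pixel shaders typically run many more instructions per invocation than vertex shaders, so a 1:1 VS:PS pairing would leave the vertex hardware idle most of the time.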
MuFu said:
geo said:What about "advanced memory interface" and kaleidoscope (altho there has been at least one hint that might turn out to be 'aka' rather than 'and').
I think Kaleidoscope is something to do with display management or output QC.
As for the interface, there have been a few not-so-subtle hints that there might be some big internal changes (topological?). Lots of clues around if you want to take this a step further.