The LAST R600 Rumours & Speculation Thread

Status
Not open for further replies.
texturing... maybe?

Well yes silly, but what sort of texturing? Extreme texturing? Ultra-texturing? Immoral-O-Texture? Or what? :smile:


Q: How much bandwidth?
A: huge lots. Many bandwidths.
Q: why? why so much? why nearly twice as much as their mortal enemies?
A: oh, because it'll be used to feed the TMUs
Q: that's silly. Why would they need twice as much as The Others? Are they silly?
A: No, they're not silly, they need the same amount of bandwidth per TMU, there's just twice as many of them
Q: That's goods. So they'res not teh silly. Why are there twice as many of them those there TMUs then?
A: Because they do twice as much texturing ... to do ... some things...
Q: What sort of things?
A: Naughty things

neliz said:
Something immoral! pay attention to the discussion.

Q: Will it be naughty things they do with the TMUs? Is it naughty tiem?
A: Oh yes. It's naughty tiem. Definitely.

How much extra texturing is required, and when and where and how? Compared to G80, that is... and why twice as much?
 
Well, we know that AMD/ATI's unified architecture is heavily influenced by the Xbox 360 graphics chip. And we know that Xbox 360 is closely associated with HD DVD, which in turn is the HD format of choice of pr0n. Nvidia, on the other hand, is in the Sony PS3 camp, which is tied to BR, which is "not so much" on the HD pr0n front. What does this mean for R600 TMUs? Well, I think the facts speak for themselves, do they not? ;)

No, really, that's enough about naughty TMUs for a while... back to feeds & speeds, plz. :cool:
 
I'm half serious, Geo. A year out you nailed your flag to the mast re G80 and you were right(-ish), despite it being a total left-fielder for many, architecture-wise. So, come on, give us a sig-worthy declaration. Why do they need twice as much? Eh? A(MD|TI) clearly has a left-fielder coming here, but what is it? How is this going to translate into what's going on in my face that G80 can't deliver?
 
I sure as hell hope that hypothetical ~150GB/sec of bandwidth has something better to show than 2x or 3x the performance at 2560*1600 with, say, 8x multisampling, where even the winner will still lack playability.
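For a sense of scale, here's a back-of-envelope sketch of what 2560*1600 with 8x multisampling actually costs. All the numbers are illustrative assumptions (RGBA8 colour + D24S8 depth per sample, no framebuffer compression, and overdraw/fps picked out of thin air), not anything from either vendor:

```python
# Rough multisampled framebuffer footprint at 2560*1600 with 8x MSAA,
# assuming 4 bytes colour (RGBA8) + 4 bytes depth/stencil (D24S8)
# per sample and no framebuffer compression (real chips compress).
width, height, samples = 2560, 1600, 8
bytes_per_sample = 4 + 4
pixels = width * height
footprint = pixels * samples * bytes_per_sample
print(f"{footprint / 2**20:.0f} MiB multisampled framebuffer")  # 250 MiB

# Raw colour-write traffic at an assumed 60 fps with overdraw of 3,
# before any compression -- purely illustrative numbers.
overdraw, fps = 3, 60
traffic = pixels * samples * 4 * overdraw * fps
print(f"~{traffic / 1e9:.0f} GB/s of colour writes")  # ~24 GB/s
```

Even with heavy compression knocking that down, it's easy to see how hi-res, hi-AA scenarios eat bandwidth.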

As for something being immoral here, it's most likely the neverending debate about inches :D
 
Insiders suggest the R600 currently in the lab is ~50% faster than an X1950 XTX in some popular DX9 titles when 4x AA is applied.

Wouldn't this make it slower than G80? Hard to believe it could be slower with all these ridiculous power numbers and board-size rumors.

BTW, anyone else find it weird that we can travel to different planets but we have problems developing GPUs?
 
I'm half serious, Geo. A year out you nailed your flag to the mast re G80 and you were right(-ish), despite it being a total left-fielder for many, architecture-wise. So, come on, give us a sig-worthy declaration. Why do they need twice as much? Eh? A(MD|TI) clearly has a left-fielder coming here, but what is it? How is this going to translate into what's going on in my face that G80 can't deliver?

Yeah, well I don't make 'em up out of whole cloth either. There's usually a few hints here and there to correlate. I've really not heard anything on what all that bw is going to go for.

We do know that NV came up with CSAA, in part, to avoid the BW penalty of hi-res hi-aa scenarios. Maybe AMD is going for more old fashioned msaa of 12x or somesuch without bw saving techniques like CSAA associated. I don't think it's credible to suspect they left AA alone, or even bumped to 8x and called it a day. Could they have done something even wilder on the AA front that would chew up serious amounts of BW? Dunno --we'd need a deeper theorist than me to throw out some possibilities. HDR+AA is another obvious BW hog to be looking at. Some people seem to like the gpgpu possibilities.

But, sure, I'm definitely of the opinion that 512-bit is not something you do for a checkbox. That's crazy talk there, in my book.
 
Yeah, well I don't make 'em up out of whole cloth either. There's usually a few hints here and there to correlate. I've really not heard anything on what all that bw is going to go for.

We do know that NV came up with CSAA, in part, to avoid the BW penalty of hi-res hi-aa scenarios. Maybe AMD is going for more old fashioned msaa of 12x or somesuch without bw saving techniques like CSAA associated. I don't think it's credible to suspect they left AA alone, or even bumped to 8x and called it a day.

On G80 you'd actually put a 512-bit bus to good use if you wanted extreme resolutions with 8x MSAA or, even worse, 16xQ. For the other two hybrid MS/CSAA modes you'd hardly need an extravagant memory footprint or bandwidth.
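The CSAA saving in a nutshell: on G80 the 16x CSAA mode stores only 4 colour/Z samples plus a 16-sample coverage mask, while 8xQ stores 8 full samples. A sketch with illustrative numbers (the 4-bits-per-coverage-sample mask cost below is my assumption, not a confirmed figure):

```python
# Approximate per-pixel framebuffer storage, assuming 4 bytes colour
# + 4 bytes depth/stencil per stored sample, no compression.
def msaa_bytes(stored_samples):
    return stored_samples * (4 + 4)

def csaa_bytes(stored_samples, coverage_samples):
    # Coverage mask cost is a guess: ~4 bits per coverage sample.
    return stored_samples * (4 + 4) + coverage_samples * 4 // 8

print(msaa_bytes(8))      # 8xQ (true 8x MSAA): 64 bytes/pixel
print(csaa_bytes(4, 16))  # 16x CSAA: 40 bytes/pixel
```

Hence coverage sampling buying 16x-ish edge quality at well under true-multisampling storage and bandwidth cost.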

I guess it's safe to assume that ATI also went for single-cycle 4x MSAA; it then comes down to how many cycles of AA would actually make sense, and not for bandwidth alone. What's so horrendous about 8x sparse MSAA anyway? And it's not like marketing won't find ways to downplay coverage sampling as useless.

Could they have done something even wilder on the AA front that would chew up serious amounts of BW? Dunno --we'd need a deeper theorist than me to throw out some possibilities. HDR+AA is another obvious BW hog to be looking at. Some people seem to like the gpgpu possibilities.

I'd exclude 16x MSAA. For one, it sounds like too large a penalty for now, and as a close second, I recall an M$ presentation mentioning that "some" future D3D10 GPUs will go up to 16xAA.
 
Some people seem to like the gpgpu possibilities.

this is ludicrous-speak, isn't it? There's no mass market for gpgpu, unless that's going to come out of left field too and AMD/ATI are going to announce gpgpu/physics support which hasn't leaked yet. gpgpu is a niche; it makes zero sense for AMD(ATI) to make such a major design decision (which I'm presuming will cost them major money) to satisfy a market which doesn't exist. What's the lifetime of R600? Six months? Three months, now that it's three months late? Where are the killer-app gpgpu apps coming from in the next three/six months? Planet Mars? gpgpu sounds like a made-up explanation. It's like coming up with Itanium 2 as a HPC/desktop/mobile one-size-fits-all processor, isn't it?

To me it's looking more and more like they've either screwed the pooch, they've got some IQ enhancement that no-one has thought of (which had better be damned good), or they're shooting to win the IQ crown at stupid resolutions that 1% of the population play at.

But, sure, I'm definitely of the opinion that 512-bit is not something you do for a checkbox. That's crazy talk there, in my book.
Well in that I think you are 100% correct. In interesting times, we live.
 
I guess it's safe to assume that ATI also went for single-cycle 4x MSAA; it then comes down to how many cycles of AA would actually make sense, and not for bandwidth alone. What's so horrendous about 8x sparse MSAA anyway? And it's not like marketing won't find ways to downplay coverage sampling as useless.

Well, as I recall saying somewhere or other recently (ahem!), they've had 3-loop * 2x AA since the 9700 Pro... which had 128MB of RAM and ~20 GB/s of bw. R600 looks likely to have on the order of 8x more of each! They're going to bump to 8x MSAA (an increase of 33% over that venerable 9700 Pro) and quit? I'm not buying that right now. The obvious answer would be a bump to 3 * single-cycle 4x for 12x MSAA. But possibly that's too obvious and they have something else up their sleeves.
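The loop arithmetic above, sketched out (the 9700 Pro figures are real; the R600 line is pure speculation, per the post):

```python
# Max MSAA degree from a loop-based resolve:
# samples per cycle * number of loops.
def max_msaa(samples_per_cycle, loops):
    return samples_per_cycle * loops

r300 = max_msaa(2, 3)        # 9700 Pro: 2x per cycle, 3 loops -> 6x
r600_guess = max_msaa(4, 3)  # speculated: single-cycle 4x, 3 loops -> 12x
print(r300, r600_guess)      # 6 12
print(f"8x over 6x: +{(8 - r300) / r300:.0%}")  # +33%
```

Which is where the "12x as the obvious answer" reasoning comes from: same loop count, just a wider single-cycle resolve.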
 
gpgpu is a niche
It's a niche, but potentially a very lucrative niche as a standalone market. I know we tend to think of these things as consumers, but there's more to it than that. For what it's worth (probably not a lot), here's a quote from Jen-Hsun Huang, Q3 2006 CC - I think it's fairly clear they're considering GPGPU as a non-negligible revenue stream. As for Quadro Plex, think of what G80 does in terms of output (NVIO!) and what it means for scalability there...

It's fairly clear that with G80, GPGPU was a very real consideration, and so was imaging. Neither provides any real advantage for G80, but these architectural decisions will likely provide them with high-margin revenue streams in the hundreds of millions of dollars - if they manage to capture that market share, of course.

http://seekingalpha.com/article/20294
Jen-Hsun Huang said:
Our expectation is that it is going to be a very large business. I don't know exactly how large it is going to be yet. But I agree with you. It is going to be a very chunky business. It should be a very significant business. The number of installations of image generators in the world, just thinking through that, large workstation companies used to -- multibillion-dollar workstation companies used to serve that market. And so we're going to replace those aging image generators. We're going to create new categories of desk-sized super workstations. And then in combination with GPU computing, I think it is a whole new computing model that we are excited about. It is hard for us to guess exactly what it is right now, but I think it is going to be very large, Mark.


Uttar
P.S.: The point I'm trying to illustrate is that, for a variety of reasons, keeping GPGPU in mind when designing a DX10-level architecture is a Good Thing. I don't think designing a BOARD (->bandwidth) around it makes sense YET, though, unless aimed at the professional market.
 
Wouldn't this make it slower than G80? Hard to believe it could be slower with all these ridiculous power numbers and board-size rumors.

BTW, anyone else find it weird that we can travel to different planets but we have problems developing GPUs?

Ahem..

Excuse me.. what other planets did WE travel to .. besides in our dreams..??

The harsh reality is that probably none of the current board members will be alive when someone actually sets foot on any other planet...
 
Ahem..

Excuse me.. what other planets did WE travel to .. besides in our dreams..??

The harsh reality is that probably none of the current board members will be alive when someone actually sets foot on any other planet...


Even the Moon is in doubt :LOL:
 
Ahem..

Excuse me.. what other planets did WE travel to .. besides in our dreams..??

The harsh reality is that probably none of the current board members will be alive when someone actually sets foot on any other planet...

Should have been clearer: I just meant sending rockets to other planets.
 
Well, I think it's worth noting R580 was really held back by texturing. If R600 has 32 TMUs, that's double. Plus add a little more for some extra clock.

So I think something like 100% plus a little bit faster than R580 is as much as R600 could hope to be in general.

Which means it'll be solidly competitive with G80 and all, but it's unlikely to be a performance stunner.
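Putting rough numbers on the texturing argument (R580's 16 TMUs at 650 MHz are real; the 32 TMUs and the 750 MHz clock for R600 below are rumour plus my own guess):

```python
# Peak texel fillrate = TMUs * core clock, in gigatexels/s.
def texel_rate(tmus, mhz):
    return tmus * mhz / 1000.0

r580 = texel_rate(16, 650)   # X1900/X1950 XTX core: 16 TMUs @ 650 MHz
r600 = texel_rate(32, 750)   # hypothetical: rumoured 32 TMUs, guessed clock
print(f"R580: {r580:.1f} GT/s, R600 guess: {r600:.1f} GT/s")
print(f"speedup: {r600 / r580:.2f}x")  # ~2.31x, i.e. 100% plus a bit
```

Doubling the TMUs plus a modest clock bump lands right around the "100% plus a little bit" figure, assuming texturing really is the bottleneck.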
 