The LAST R600 Rumours & Speculation Thread

Status
Not open for further replies.
I don't know, but I want one really badly now that my X1800 is destroyed. I couldn't give a damn if it's blurry, just as long as it works.

By the way, the length of that PCB looks to be about 32cm. Good to know we've gotten to the point where, if someone breaks into your house, you can tear that card out of your PC and decapitate the intruder with all the ease of fine Japanese steel.
 
I somehow doubt they care much about that. Furthermore, I doubt they're really eyeing replacing the current line-up; rather, complementing it with a higher-end part and cost-reducing the others through the shrink.

Well, the 320MB GTS will probably be about as fast as the 640MB version, so there's definitely going to be some shuffling at the high end, but when do you think Nvidia will make a move?

I do not believe the drivers are mature enough for us to judge the shader core's performance properly in DX10 yet.

Hmmm, have you come across something in particular that makes you say that, or is that just an assumption based on the current state of Vista drivers?
 
Driver maturity is their own look-out. The more important point is there is very little to compare with (app-wise), and nothing serious to compare against. Can't compare DX10 performance against G71, X1950, etc.
 
Driver maturity is their own look-out. The more important point is there is very little to compare with (app-wise), and nothing serious to compare against. Can't compare DX10 performance against G71, X1950, etc.
You can look at various throughputs through DX10 and assess them in relation to theoreticals.
 
I want to focus on this statement!

R600 with some software tweaks will actually be faster than we all anticipated.

WTF does this really mean? Why would anyone word it like this?

What software tweaks? If there were no tweaks would it only run as fast as we all anticipated?

Not a software revision? ..or fixes or adjustments? Software tweaks?! Sounds like patch workery. (Is that a word? :D)

Any ideas?
 
You can look at various throughputs through DX10 and assess them in relation to theoreticals.

Using what? Most of the synthetics we use in performance testing reviews are tried, tested, and well understood in the community. While Rys and Uttar have shown (and will show again, I'm sure) that they have no fear of writing their own code for testing, if you are both developing code and trying to test with it, the single-point-of-reference problem (as it exists today for DX10) is fairly substantial, it seems to me. If you get crappy results vs theoretical, are you really justified in pointing at the vendor's driver team if you can't validate that you'd get better results on another card?

Maybe I'm missing something: does MS offer some benchmarking-type applets for DX10?
 
For instance, take the Geometry Shader: you know how many instructions you can put through theoretically, and you also know how many instructions you can feasibly get through in VS and PS scenarios. You can test what they are like under the GS, and how various aspects of GS utilisation (such as amplification) affect the throughput.
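To sketch the shape of such a synthetic (every number below is invented for illustration — you'd substitute the part's actual theoretical peak and your own measured rates):

```python
# Purely hypothetical figures, not real data for any card: an assumed
# theoretical peak GS output rate, and measured rates at increasing
# amplification factors (vertices emitted per input primitive).
PEAK_VERTS_PER_SEC = 1.0e9  # assumed theoretical peak

# hypothetical measurements: amplification factor -> measured verts/sec
measured = {1: 9.0e8, 4: 7.0e8, 16: 3.0e8}

# efficiency = measured throughput as a fraction of the theoretical peak;
# watching this fall off with amplification is the interesting part
efficiency = {amp: rate / PEAK_VERTS_PER_SEC for amp, rate in measured.items()}

for amp in sorted(efficiency):
    print(f"{amp:2d}x amplification: {efficiency[amp]:.0%} of theoretical")
```

The point isn't the absolute numbers, but how quickly efficiency degrades as amplification grows — that's the card-specific behaviour such a test would expose.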
 
Ok, but I thought we would have had at least something to demonstrate the efficiency improvements of DX10 over DX9.

I understand where you're coming from, because I've been hearing that song for the last year too. But now that we're here, I'm hearing something slightly different, or maybe a little tangential to what I thought I understood. So let me ask it this way: if a given piece of code is DX9 or DX10, then how do you show performance of DX9 vs DX10 without an app that has two different code paths, optimized for each, performing the same workload?

What I'm hearing now is that that was very much an API-specific point, and that DX9 code is going to run a bit slower on Vista than XP, because of WDDM moving the driver from kernel to user space. So typically expect to see slightly slower Vista gaming performance where you actually can compare apples to apples (DX9 performance), and I still don't know how you can compare DX9 to DX10 right now.
 
Well, you'd have your test rigs all dual boot XP and Vista.

And therefore what? My understanding is there are three code paths in play here: DX9 on XP, DX9 on Vista, and DX10 on Vista. Dual booting OSes allows you to compare DX9 on XP vs DX9 on Vista... and nothing else.
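To put that another way, here's a trivial sketch of which factors each pairwise comparison actually varies (the config labels are just placeholders for the three setups being discussed):

```python
from itertools import combinations

# The three runnable configurations in play.
configs = [("XP", "DX9"), ("Vista", "DX9"), ("Vista", "DX10")]

def varies(a, b):
    """Return which experimental factors (OS, API) differ between two configs."""
    return [name for name, x, y in (("OS", a[0], b[0]), ("API", a[1], b[1]))
            if x != y]

for a, b in combinations(configs, 2):
    print(a, "vs", b, "->", varies(a, b))
```

Only Vista/DX9 vs Vista/DX10 varies the API alone — and that's exactly the comparison that needs an app with both code paths, which is the missing piece. XP/DX9 vs Vista/DX10 varies two things at once, so any delta is ambiguous.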
 
I don't see what the big deal is with comparing DX9 on XP with DX10 on Vista. So what if the OS change impacts the results? Better yet, run all three scenarios and base your conclusions on those results.
 
Running DX9 code on Vista is not testing DX10. Of course we'll do that, but it won't be testing DX10. Please point at the piece of code you want us to run on Vista as indicative of "DX10 performance". Unless I'm mistaken, you're going to say "write your own!", in which case we're back to "and how do we test that it's the card/drivers that are broken, rather than our code?"
 
I don't see what the big deal is with comparing DX9 on XP with DX10 on Vista. So what if the OS change impacts the results? Better yet, run all three scenarios and base your conclusions on those results.

Geo's problem is that including a lot of variables (switching OS) will not give you a clear picture. As it looks now, there will be no gaming benchmark either that runs the same path in both DX9 and DX10.
In the end we're left guessing at the reasons why a certain code path would be faster or slower.
All we have is Billy's blue eyes telling us it's so.
 
Just like DX9 before it, DX10 is much talked about right now, but will only be really useful in a year or so.

I wouldn't bother with synthetic benchmarks on "presumed Direct3D 10 software behaviour in the future" just yet.
DX9 is still alive and kicking.
 
Running DX9 code on Vista is not testing DX10. Of course we'll do that, but it won't be testing DX10. Please point at the piece of code you want us to run on Vista as indicative of "DX10 performance". Unless I'm mistaken, you're going to say "write your own!", in which case we're back to "and how do we test that it's the card/drivers that are broken, rather than our code?"

Yep, that's exactly what I'm gonna say - but I guess you've already addressed that based on Dave's suggestion :)

Geo's problem is that including a lot of variables (switching OS) will not give you a clear picture. As it looks now, there will be no gaming benchmark either that runs the same path in both DX9 and DX10.
In the end we're left guessing at the reasons why a certain code path would be faster or slower.
All we have is Billy's blue eyes telling us it's so.

Yeah I understand that but it's not like we're trying to cure cancer. If DX9+XP is significantly slower than Vista+DX10 on the same task then we can obviously draw the conclusion that Microsoft's promises hold some merit. I guess we can always wait for Andy's DX10 version of SAVSM!
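Running all three scenarios at least lets you factor the OS cost out, something like this (all the frame times below are invented purely to show the arithmetic, and it still assumes an app with both code paths for the Vista comparison):

```python
# Hypothetical frame times in ms for the same workload; lower is better.
# Every number here is made up to illustrate the normalisation.
xp_dx9 = 20.0
vista_dx9 = 22.0    # same code path as XP, so the delta is the OS/WDDM cost
vista_dx10 = 18.0   # DX10 path (assuming an app that offers both paths)

os_cost = vista_dx9 / xp_dx9        # Vista overhead on identical DX9 code
naive_gain = xp_dx9 / vista_dx10    # cross-OS comparison: two variables at once
api_gain = vista_dx9 / vista_dx10   # same OS, so this isolates the API change

print(f"OS cost {os_cost:.2f}x, naive gain {naive_gain:.2f}x, "
      f"API gain {api_gain:.2f}x")
```

With these made-up numbers the naive XP-vs-Vista comparison understates the API gain, because the Vista overhead partially cancels it — which is Geo's objection in a nutshell.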
 
Yep, that's exactly what I'm gonna say - but I guess you've already addressed that based on Dave's suggestion :)

Those s/e/l/f/i/s/h b/a/s/t/a/r/d/s/ jolly fellows at AMD don't have a single-point-of-reference problem with their DX10 test code, now do they? :cool:

I'd like to see it myself, of course, but it's up to the code monkeys on the team whether it's practical or not right now.
 
I want to focus on this statement!

R600 with some software tweaks will actually be faster than we all anticipated.

WTF does this really mean? Why would anyone word it like this?

What software tweaks? If there were no tweaks would it only run as fast as we all anticipated?

Not a software revision? ..or fixes or adjustments? Software tweaks?! Sounds like patch workery. (Is that a word? :D)

Any ideas?

Fuad is just talking bollocks. English is not his first language, so he sometimes phrases things very strangely, or he's just plain old overheard the wrong thing, put two and two together, and got six and a half. Happens all the time at the Inq and should not be ascribed any sort of significance.
 