By Jawed:
Rubbish. One is dependent on VLIW and superscalar scheduling limitations, while the other isn't.
Jawed said: Rubbish.

AFAICT, Tim was responding to the claim that the main reason why R6xx isn't faster is that they are 'not being anywhere near as granular as the GeForce is'. So clearly the subject was efficiency; clearly R6xx has less raw power in several respects, but Tim's point (or at least how I understood it) is merely that R6xx's global scheduler etc. are perfectly fine and that branching isn't the main problem - if we are talking about efficiency, the only real disadvantage R6xx has is VLIW. Of course, maybe Tim meant more than that, in which case I disagree with him...

Scali said: Words are being put into my mouth now, I am not going to defend those.

You obviously aren't supposed to defend the 'if this is really what you mean, then implicitly you are saying that [...]' claims - the point of those is to make you realize your base claim is false, not that you really mean what is implied.

Scali said: It is, because the same driver codebase will be used for Larrabee.

Anyway, if your claim now is that the Larrabee team will merely look at the G3x/G4x codebase and draw inspiration from it, then duh, that's quite likely. But what you said clearly was that they would 'use the same codebase'; i.e. the differences are what you should count, not the similarities, which would be overwhelming.

So yeah, you're either still wrong, or you're backpedaling, or it's simply all a misunderstanding because of the way you phrased your original claim, which didn't represent what you actually meant to say. It doesn't really matter; I don't care either way, as I don't really see the problem with any of these three scenarios.
By Scali:
Now you're just twisting the facts.
A 780G scores 1183 3DMarks in 3DMark06 (http://arstechnica.com/reviews/hardw...t-review.ars/3).
My X3100 scores 560 3DMarks in 3DMark06... and part of that low score is probably down to my modest 1.5 GHz T5250 processor (the IGP in that article is actually a 3100, incorrectly labeled as an X3100 by the article... notice the huge performance difference... also the fact that that one doesn't even do SM3.0, while this one already does SM4.0).
So they're about a factor 2 apart, not nearly a factor 10.
Besides, these are the extremes of the IGP market today... The 780G is the fastest IGP, and the X3100 is the slowest. nVidia is somewhere in between.
If Intel can get 20-40% extra performance out of the X4000 series (which is not that unreasonable considering it gets 10 processing units instead of 8, it will use DDR3, and it is clocked higher), then they'll be more or less in the same ballpark performance-wise.
This rumour includes an Intel slide claiming about 3x the performance of G33: http://www.fudzilla.com/index.php?op...=3828&Itemid=1
So they should land somewhere in 800-1000 3DMarks... A stone's throw away from the 1183 that the 780G gets.
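A quick back-of-the-envelope sketch (in Python, purely illustrative) of the arithmetic above; the 560 and 1183 scores and the 20-40% uplift are the figures from the post, not new measurements:

# Back-of-the-envelope check of the 3DMark06 figures quoted above.
x3100_score = 560        # poster's X3100 result on a 1.5 GHz T5250
amd_780g_score = 1183    # 780G result from the Ars Technica review

# The gap between the slowest and fastest IGPs on the market today
print(f"780G vs X3100: {amd_780g_score / x3100_score:.1f}x")   # ~2.1x, not ~10x

# The assumed 20-40% X4000-series uplift (10 processing units vs 8, DDR3, higher clocks)
low, high = x3100_score * 1.2, x3100_score * 1.4
print(f"X4000 at +20-40%: {low:.0f}-{high:.0f} 3DMarks")
# The 800-1000 estimate in the post additionally leans on the rumoured
# '3x the performance of G33' slide, not just this uplift.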
While the 780G obviously is faster, the difference shouldn't be THAT large (especially since the X3500 should also be at least as fast as the X3000).
I think I know exactly what it is: They tested on Vista, and the other test is on XP.
Scali said: We all know what a poor track record Intel has, but that was then and this is now, so let's stop reiterating that.

Why? No offense, but what indications have they given that they're changing? Also, if I recall, ATi wore that "bad driver" albatross for quite a while after they got their driver act together...

Scali said: Perhaps you missed the driver that I linked to at the start of this topic?

Even if that driver were perfect, one driver does not a turnaround make.
That driver is as poor as all other Intel drivers, given that it doesn't manage to run all the titles that are out there (run as in start without crashing, not run with adequate speed or anything). Of course, the G35 I have lying around here could be cursed or something.
Watch out: NV's IGPs for Intel only seem to have single-channel DRAM interfaces...

The current ones (which are clearly aimed at taking market share from VIA and SiS, which also only have single-channel memory controllers), yes, but...
Seriously though, I just can't believe Intel can get away with selling crap like this for so many years. Sorry for the rant.