Grid 2 has exclusive Haswell GPU features

ps: nick -- thanks for turning up but font size fer christ sake ;D
pps: you make TressFX sound very rosy, but it was a big hit on fps, and that's with only a single character supporting it (from a quick google I saw between 30 and 50%). Plus sometimes the hair would clip through her shoulder blades and make her look like she had hairy armpits :D
Sorry about the font in my first post. I used Word to compose my answer and didn't realize the copy/paste would keep the (large) font.
The initial release of TressFX had problems; clipping was one of them. A game update improved this particular issue considerably. In general, there is a lot more that Crystal and AMD would have loved to add to TressFX for this game, but we ran out of time (this is mentioned in Jason Lacroix's GDC presentation). One example is the use of a separate hair shadow map for shadow casting onto Lara's face.

But then in other parts of the interview, you basically imply that AMD APUs are somehow in a different performance category; for instance: "when it comes to high-end gaming, current Intel integrated graphics solutions usually force users to compromise between quality or performance, which is a tough choice to impose on gamers." The same statement can be made for all integrated graphics... it's sort of obvious - they run in a fraction of the power and thermal budget of the discrete GPUs.
So you're saying that Intel's solutions are not fast enough to be usable, yet Iris Pro is faster than any of the current APUs that AMD has released, as far as I can tell from reviews. So does that mean AMD APUs are magically more usable just because they say AMD on them? Invoking the PS4 is "misleading" since AMD ships nothing of that class of part for PCs currently; the current parts aren't even GCN-based yet, so the comparison is pretty far-fetched IMHO.
I think we have already reached the line beyond which public discussions with a competitor on the merits of our respective products become unhealthy. So I will not comment on this.

I don't doubt for a second it can be made safe, what I doubt is that it can be made safe on any past, present and future DX11+ GPU. At that point it just becomes a proprietary extension but I thought you were arguing developers don't like that :)
I haven't mentioned an extension so far. It's too early to talk about this.
 
Then again, when buying midrange stuff, you certainly do not get Iris Pro.
Sure, but like I said we're just talking tech here, not pricing. I did say the latter is obviously relevant to consumers, but not to an architectural discussion.

I also seriously doubt Intel's IQ is the same as AMD's or NVIDIA's; they have to be cutting corners somewhere.
This is just out-dated thinking... "image quality" concerns were mostly something for the DX9 era where the APIs were very loose in their requirements. These days (DX10+ GPUs) the only significant degree of freedom is in anisotropic filtering and LOD computation. As nAo already mentioned, I think Intel's is currently the best as of Ivy Bridge, although all three are really very good at this point. Several of the Haswell reviews had image quality comparisons and none that I saw found any noticeable differences between the modern implementations, as expected.

Of course the way you phrased that makes me think that you just have a preconceived notion that Intel graphics has to be worse "somehow", and are just looking for ways to support that...

I think we have already reached the line beyond which public discussions with a competitor on the merits of our respective products become unhealthy. So I will not comment on this.
Yeah that's cool, I just wanted to explain what I meant about my "misleading" comment, since you replied to that. As I stated twice, I realize you have to play "the marketing game" when speaking to the media anyways.

I haven't mentioned an extension so far. It's too early to talk about this.
Well I'm interested to hear whatever comes of it :) That said, I still don't think there's any way that you can retroactively change the DX spec and make those semantics safe to ship on "any DX11 card", so it effectively is an extension, regardless of how it might be implemented.
 
This is just out-dated thinking... "image quality" concerns were mostly something for the DX9 era where the APIs were very loose in their requirements. These days (DX10+ GPUs) the only significant degree of freedom is in anisotropic filtering and LOD computation. As nAo already mentioned, I think Intel's is currently the best as of Ivy Bridge, although all three are really very good at this point. Several of the Haswell reviews had image quality comparisons and none that I saw found any noticeable differences between the modern implementations, as expected.
For example, Anand mentioned some form of texture flashing in BF3 (a major game). Bugs and lacking game profiles are something to be expected from Intel solutions, which will also directly affect image quality.

Of course the way you phrased that makes me think that you just have a preconceived notion that Intel graphics has to be worse "somehow", and are just looking for ways to support that...
Maybe. I just have a hard time agreeing with the idea that the recent and young Intel offerings could be better than the old and gold solutions from NVIDIA and AMD; the difference in experience and support is vast.
 
For example, Anand mentioned some form of texture flashing in BF3 (a major game). Bugs and lacking game profiles are something to be expected from Intel solutions, which will also directly affect image quality.
They're just as expected from AMD and Nvidia solutions. Bugs happen.
Maybe. I just have a hard time agreeing with the idea that the recent and young Intel offerings could be better than the old and gold solutions from NVIDIA and AMD; the difference in experience and support is vast.
There's a difference, but it's not even remotely as pronounced as it used to be.
 
There's a difference between driver maturity and image quality. Texture flashing is hopefully a driver bug. Texture filtering is pure hardware. Of course, it's pointless to have the best image quality if you're rendering the wrong triangles. ;)
 
For example, Anand mentioned some form of texture flashing in BF3 (a major game). Bugs and lacking game profiles are something to be expected from Intel solutions, which will also directly affect image quality.
That's a minor driver or game issue and has nothing to do with image quality at the hardware level. Such issues exist on AMD and Nvidia too; just read some driver threads and you will find similar ones.
 
I think we have already reached the line beyond which public discussions with a competitor on the merits of our respective products become unhealthy. So I will not comment on this.
Well then it looks like the only way to resolve this is
wait for it....

IHV Deathmatch
Nick, Andy, you both have Quake 3, right?
 
Care to explain why Intel must be cutting corners? For instance IVB and HSW have one of the best anisotropic filtering implementations (if not the best).

Not that it's relevant to the topic, but Intel's engineers should be publicly lynched if they didn't deliver high-quality AF nowadays, considering the abominations that used to be found in Intel's past integrated GPU solutions years ago.

The real question here would be why Intel had to be indirectly forced by increasing competition to care about such details in the first place, when from some point onwards these "details" should be self-explanatory, rather than trying to sell a negative LOD slider as any form of AF, for example.
 
The real question here would be why Intel had to be indirectly forced by increasing competition to care about such details in the first place, when from some point onwards these "details" should be self-explanatory, rather than trying to sell a negative LOD slider as any form of AF, for example.
You're applying a double-standard here. Everyone's AF implementations used to be terrible, Intel just started to care about graphics the most recently of the three major IHVs.

Do you understand how the math for LOD calculations works? It's obvious how all vendors used shortcuts in the past to avoid expensive stuff like square roots. There's no LOD "bias" going on, it's just that by necessity when you have poor approximations they must be on the side of blurrier LODs, else you expose aliasing and often violate the spec range of acceptable LOD computation. There's no conspiracy/trickery; in the past everyone used poor approximations to this LOD computation from gradients, and now that has been improved.
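
To make that concrete, here is a minimal sketch in plain C (purely illustrative, not any vendor's actual hardware path) of the textbook LOD computation from texture-space gradients next to a sqrt-free approximation of the footprint size. The point is that a cheap estimate has to be chosen so it never comes out smaller than the true footprint, so any error lands on the coarser, i.e. blurrier, mip rather than exposing aliasing.

```c
/* Illustrative sketch only -- not any vendor's actual hardware path.
 * "Textbook" mip LOD from texture-space gradients (already scaled to
 * texel units) versus a sqrt-free approximation of the footprint size. */
#include <math.h>
#include <stdio.h>

/* Exact: log2 of the longer of the two gradient vectors (needs sqrt). */
static float lod_exact(float dudx, float dvdx, float dudy, float dvdy)
{
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);
    return log2f(fmaxf(len_x, len_y));
}

/* Cheap: approximate each vector length as max + 0.5*min of the absolute
 * components.  This never under-estimates the true length (it over-estimates
 * by at most ~12%, about a sixth of a mip level), so the error always pushes
 * the result toward a coarser, blurrier mip instead of toward aliasing. */
static float approx_len(float x, float y)
{
    float ax = fabsf(x), ay = fabsf(y);
    return fmaxf(ax, ay) + 0.5f * fminf(ax, ay);
}

static float lod_cheap(float dudx, float dvdx, float dudy, float dvdy)
{
    return log2f(fmaxf(approx_len(dudx, dvdx), approx_len(dudy, dvdy)));
}

int main(void)
{
    /* Worst case for the approximation: a diagonal (45-degree) gradient. */
    printf("exact %.3f  cheap %.3f\n",
           lod_exact(4.0f, 4.0f, 1.0f, 0.0f),
           lod_cheap(4.0f, 4.0f, 1.0f, 0.0f));
    /* exact ~2.500, cheap ~2.585 -- the cheap LOD is slightly blurrier. */
    return 0;
}
```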
 
You're applying a double-standard here.

Am I? Let's see.

Everyone's AF implementations used to be terrible, Intel just started to care about graphics the most recently of the three major IHVs.

Diplomatically phrased, no doubt. Intel obviously had horrible AF for the longest time precisely because they only started caring about graphics recently. Only starting with IVB did we get AF that is approximately on the level of what the competition had more or less by default for probably more than half a decade prior to that.

Do you understand how the math for LOD calculations works? It's obvious how all vendors used shortcuts in the past to avoid expensive stuff like square roots. There's no LOD "bias" going on, it's just that by necessity when you have poor approximations they must be on the side of blurrier LODs, else you expose aliasing and often violate the spec range of acceptable LOD computation. There's no conspiracy/trickery; in the past everyone used poor approximations to this LOD computation from gradients, and now that has been improved.

No, I'm a simple layman and I can't write a single line of code, if that's good enough for you. However, my eyes as a user are trained well enough to tell a negative LOD offset (which can be verified with AF testing applications in any case) apart from real anisotropic filtering or any optimisation of the latter.

In the meantime, again, kudos for getting on par with the competition, but I beg your pardon: Intel, or anyone associated with it, also has to take the criticism that it took TOO LONG to get this far. Better late than never, for sure.
 
The real question here would be why Intel had to be indirectly forced by increasing competition to care about such details in the first place, when from some point onwards these "details" should be self-explanatory, rather than trying to sell a negative LOD slider as any form of AF, for example.
There is no forcing, there are just priorities, which I am sure you noticed have quite changed in recent times. Anisotropic filtering is not exactly rocket science, it actually takes more effort to do it "wrong" (but cheap) than to do it properly, if you are willing to pay the area/perf cost for it.
 
Only starting with IVB did we get AF that is approximately on the level of what the competition had more or less by default for probably more than half a decade prior to that.


Half a decade? Prior to GCN, AMD had horrible shimmering and banding issues. In fact this issue is still present on their APUs (edit: Kabini and Temash, with GCN, are of course fixed). Banding was fixed with VLIW4, but shimmering got even worse. Intel never had shimmering issues, at least. Nvidia learned their lesson with G80, indeed. Intel's AF implementation is top notch nowadays; however, in DirectX games it can't be forced through the driver. So without application support for AF, Intel users won't get AF. I reported this to them numerous times, but it seems they don't care, unfortunately.
 
but I beg your pardon: Intel, or anyone associated with it, also has to take the criticism that it took TOO LONG to get this far. Better late than never, for sure.
Sure, no doubt, I'm not defending the bad quality in the past (I wasn't even at Intel then), and I doubt anyone else would either... at that time integrated graphics was just what you slapped on the spare space on the die. I'll also note that everyone still does very "brilinear"-like optimizations, so if you're going to complain about AF quality, you really should be quite mad about that.

But what I meant by double standard is you're just picking one thing to be mad about (retroactively, since it's no longer an issue). To bring it back onto the topic of this thread, why not similarly criticize IHVs for not providing programmable blending for 15+ years? Seems like Haswell and some mobile parts are the only ones that have workable solutions there today. Arguably that's even more important than LOD computation accuracy.
 
When all the rest was horrible, who cares how good the AF quality was? Hardly anyone played at high quality settings anyway.

First fix the important stuff, then improve the rest.
 
Diplomatically phrased, no doubt. Intel obviously had horrible AF for the longest time precisely because they only started caring about graphics recently. Only starting with IVB did we get AF that is approximately on the level of what the competition had more or less by default for probably more than half a decade prior to that.
I think that's not exactly fair. I agree that Intel obviously didn't care too much about graphics quality until recently. But then they did it right: no slow creeping toward improving on some mediocre approximations, but a jump straight to the optimum, to "textbook AF" if you want to call it that. Currently, they are really providing a top-notch solution for AF. I think it's not justified to argue from the shortcomings of their older graphics.
Half a decade? Prior to GCN, AMD had horrible shimmering and banding issues.
While I agree on the shimmering issue (it was apparently a bug in the interpolation algorithm for the samples; they didn't use fewer samples, but averaged them with wrong weights), overall it was still orders of magnitude better than Intel's earlier AF solutions.

But that is all in the past. Their (AMD, Intel, nV) latest solutions are at about the same (close to perfect) level, at least for DX10/11. In DX9, an application can request bilinear AF, where GCN still appears to have some issues. That's why I think Andrew's assertion that Intel probably provides the best AF solution on the market is likely true. While nV doesn't show the issues with bilinear AF that GCN does, they still have some slight angle dependency in their AF (but that is practically really, really close to indistinguishable). Technically, though, it isn't as perfect as Intel's.
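
As an aside on the "wrong weights" remark above, here is a toy illustration in C (hypothetical weights, not a reconstruction of the actual hardware bug) of why badly normalised sample weights in an anisotropic filter show up as shimmering: a flat texture should filter to the same value no matter how many taps the footprint needs, and it stops doing so once the weights no longer sum to one, so the result jumps whenever the tap count changes between neighbouring pixels or frames.

```c
/* Toy illustration only -- hypothetical weights, not the real driver/HW bug.
 * Model AF as a weighted average of N taps along the footprint's major axis. */
#include <stdio.h>

#define MAX_TAPS 16

/* Correct: weights always sum to 1, so the result varies smoothly with N. */
static float filter_normalised(const float *taps, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += taps[i];
    return sum / (float)n;
}

/* Broken: a fixed per-tap weight chosen for 8 taps and never renormalised.
 * With any other tap count the result comes out too dark or too bright. */
static float filter_wrong_weights(const float *taps, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += taps[i] * (1.0f / 8.0f);
    return sum;
}

int main(void)
{
    float taps[MAX_TAPS];
    for (int i = 0; i < MAX_TAPS; ++i)
        taps[i] = 0.5f;               /* a flat mid-grey texture */

    /* A flat texture should filter to 0.5 regardless of tap count. */
    for (int n = 7; n <= 9; ++n)
        printf("n=%d  correct=%.3f  wrong=%.3f\n",
               n, filter_normalised(taps, n), filter_wrong_weights(taps, n));
    /* correct: 0.500 0.500 0.500   wrong: 0.438 0.500 0.563 -> flicker */
    return 0;
}
```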
 
While I agree on the shimmering issue (it was apparently a bug in the interpolation algorithm for the samples; they didn't use fewer samples, but averaged them with wrong weights), overall it was still orders of magnitude better than Intel's earlier AF solutions.

Better how? AMD had, or still has, horrible shimmering and banding issues; Intel didn't.
 
From the reviews it looks like Haswell GT3e offers a GPU with good performance/watt in laptops. But for the cost of a GT3e chip, you could get a 'good enough' Haswell CPU plus faster dedicated graphics from NVIDIA or AMD. Realizing that, Iris Pro becomes much less appealing.

In fact, after checking all the vendors I know of, I can't figure out how to buy a laptop with GT3e graphics right now. My google skills have failed me :(
 
Better how? AMD had, or still has, horrible shimmering and banding issues; Intel didn't.
You've never seen it pre-Ivy Bridge? Andrew admitted that it "used to be terrible" and that he doubts anybody would defend the quality of the past. In fact, on the "quality" setting (supposedly the best) it created much worse shimmering than AMD ever did (it basically looked like a -1 LOD bias combined with subpar, highly angle-dependent filtering). It was clearly the worst solution at that time.
There is no need to defend it today, as Intel did their homework in that field.
 