Grid 2 has exclusive Haswell GPU features

High settings, 1080p, but yeah.
I'm sure they're just counting on that old belief (which used to be quite accurate) that Intel graphics suck donkey schlong, so go easy on 'em. :) I personally am quite pumped about Haswell, and I'm really, really looking forward to the GT3e versions, although who can say how good Intel drivers will really be - this has always been their weakest point, even more so than the traditionally pedestrian hardware... Maybe they've turned their ship around - I hope so.

Like pchen said, it requires special hardware features (pixel shader ordering).
Any idea if this feature will turn up in NV/AMD hardware in the future? In any case, I thought MS disallowed extensions to DX these days, to avoid the debacle with all the different flavors of DX9 we had to suffer with.

though I do question why they changed from LGA 1156 to LGA 1155... Surely the benefits couldn't have been THAT great?
It's because of the integrated voltage regulators, I wager. There's an intermediary step on the mobo that brings the voltage down to ~2.5V or some such - way higher than what older chips were fed directly - and then the on-chip VRMs take it down further from there. It seems unlikely this could be done while still maintaining compatibility with older processors.

but AMD's been keeping stuff backwards and forwards compatible for a while now.
...Because they lack the engineering resources and market oomph to do anything else, no doubt. If they had been in a position to force their customers to upgrade on an Intel-like schedule, I'm sure they would have loved to do it - it'd mean more profits for them. Instead, their bus/mobo tech has been more or less at a standstill all this time, although it's not as bad as when they were stuck with crappy ole Socket 7.

That said, I'm never gonna be all that happy about vendor specific features.
It pretty much sucks overall; it fragments the market and annoys all those who don't have the feature when devs spend time and effort on something that only impacts a (sometimes small) subset of the market, which could instead have been spent on something that benefited everyone...

Damn those first-world problems, eh! If this is the biggest worry of our lives we could all count our blessings! ;)
 
From what we've seen in one of the well-hidden readme files of Grid 2, those two effects are not only Haswell-IGP exclusive, but specific to HD 5200 (Iris).
http://www.pcgameshardware.de/Race-...Exklusive-Grafikeffekte-fuer-Haswell-1071513/

In my book, the limitation of those looks like a conscious decision, not an architectural necessity, as it would otherwise be possible on all HSW IGPs.

Now, if this is not an artifact from an early build in the readme, then I imagine it is going to be hard to market this and explain it to "normal" Haswell users, like on the desktop.
 
although who can say how good Intel drivers will really be - this has always been their weakest point, even more so than the traditionally pedestrian hardware... Maybe they've turned their ship around - I hope so.
Have you tried the 15.31 drivers? They were pretty much a rewrite of the UMD. I've been using them for a while and they have been quite robust and stable for me; comparable to NVIDIA/AMD in DirectX. Haven't used much OpenGL recently so not sure about that side.

Any idea if this feature will turn up in NV/AMD hardware in the future? In any case, I thought MS disallowed extensions to DX these days, to avoid the debacle with all the different flavors of DX9 we had to suffer with.
Don't know about competitive plans, but it's a bit harder to fit into their ROP design than Intel's, so it's not something they can just trivially add in with similar overhead. Microsoft does "disallow" extensions officially, but there are always ways around it and everyone does it (see NVAPI and AMD's equivalents).

It pretty much sucks overall; it fragments the market and annoys all those who don't have the feature when devs spend time and effort on something that only impacts a (sometimes small) subset of the market, which could instead have been spent on something that benefited everyone...
Yup, but sometimes the industry needs to get pushed forward... how many years have people been asking for similar capabilities (OIT, volumetric shadows with fixed storage)? 5? 10? Certainly as long as I've been doing graphics.

From what we've seen in one of the well-hidden readme files of Grid 2, those two effects are not only Haswell-IGP exclusive, but specific to HD 5200 (Iris).
I would be surprised if that is the case, but I don't know for sure. If it was done that way then it would just be a performance consideration (similar to why they didn't include the very slow DX11 fallback path), since these techniques do benefit greatly from the on-chip eDRAM. That said, I always prefer to have these things be allowed everywhere, even if they are realistically too slow on certain platforms... different users have different performance tolerances.

That said, I wouldn't necessarily trust the readme... I'd check in the game when Haswell comes out.
 
Have you tried the 15.31 drivers?
I wish I could say I have; I own a 2011 Sandy Bridge MacBook which runs OS X exclusively, and it seems Apple updates its drivers whenever the hell they feel like it - which is probably never, or maybe only when they release a new yearly update of OS X. I can safely say that in the more than two years I've owned this laptop, I've never once seen a graphics driver update listed in the update thingy.

Also, I'm unsure if Apple uses Intel's reference driver source code as a base, or if they cook their own IGP driver entirely from scratch. I assume they use AMD/NV source for discrete graphics though...

I've been using them for a while and they have been quite robust and stable for me; comparable to NVIDIA/AMD in DirectX.
It's very hope-inspiring to hear that. :) Seems Intel is finally REALLY taking graphics seriously, after years of saying so while mostly just paying lip service, even as recently as the Sandy Bridge CPUs.

Don't know about competitive plans, but it's a bit harder to fit into their ROP design than Intel's, so it's not something they can just trivially add in with similar overhead.
Oh, well maybe sometime in the undefined (far) future then, unless this tech is protected by patent special sauce...

Thanks for your informative post, btw. :)
 
I hope both AMD and NVIDIA can at least emulate this feature in software; top cards from both companies have a lot of performance to spare in this game, so it wouldn't matter if they lost a few FPS over such a feature.
 
Just a heads up:
Grid 2 supports AVX, and the AVX-exclusive features seem to be advanced blending and smoke shadows.
I am a bit late to the party (must check this forum more often ;)), but the advanced rendering features in Grid 2 are not based on AVX; they are based on a new DX extension that makes it possible to order certain memory accesses from a pixel shader in a well-defined and (very important) deterministic manner.

This extension enables a ton of new algorithms that were pretty much impossible or cumbersome/inefficient to implement in vanilla DX11. A classic example is the various order-independent transparency methods possible in DX11, which pretty much all require an unbounded amount of memory and multiple passes (unless you like to see parts of your polygons suddenly disappear from the screen because that 400 MB buffer you set aside just got full).
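
To make that failure mode concrete, here's a rough CPU-side C++ mock-up of the classic DX11 per-pixel linked-list approach (all names and sizes are mine, purely for illustration; in the real thing the pool, counter, and heads live in UAVs):

```cpp
// Illustrative CPU-side sketch of DX11-style per-pixel linked-list OIT.
// It shows the unbounded-memory problem: the node pool must be sized up
// front, and once it fills, later fragments are silently dropped.
#include <atomic>
#include <cstdint>
#include <vector>

struct Node {
    float depth;
    float alpha;
    uint32_t next;  // index of the next node in this pixel's list
};

constexpr uint32_t kEndOfList = 0xFFFFFFFFu;

struct LinkedListOIT {
    std::vector<Node> pool;                    // fixed-size node pool
    std::vector<std::atomic<uint32_t>> heads;  // one list head per pixel
    std::atomic<uint32_t> allocated{0};        // mirrors a UAV atomic counter

    LinkedListOIT(size_t pixelCount, size_t poolSize)
        : pool(poolSize), heads(pixelCount) {
        for (auto& h : heads) h.store(kEndOfList);
    }

    // Called once per fragment, in whatever order fragments arrive.
    bool addFragment(size_t pixel, float depth, float alpha) {
        uint32_t idx = allocated.fetch_add(1);
        if (idx >= pool.size())
            return false;  // pool exhausted: this fragment simply vanishes
        // Push onto the front of the pixel's list.
        pool[idx] = Node{depth, alpha, heads[pixel].exchange(idx)};
        return true;
    }
};
```

Resolving then still takes another full-screen pass that walks and sorts each pixel's list by depth before blending.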

On HSW we can instead implement much more efficient OIT algorithms; for instance, you won't get aliased trees in Grid 2, since trees are rendered without alpha-testing, as semi-transparent objects instead, all in one pass and using a fixed/small amount of memory. A similar (just a tad more complex) technique is also used to render deep shadow maps from the light's point of view, which in turn are used to cast volumetric shadows onto transparent and opaque objects. Other possible applications are user-programmable blending operations and more. It pretty much makes it possible to build your own per-pixel data structures without being constrained by standard data formats and ROP & atomic operations.
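
For a flavor of what the fixed-memory approach looks like, here's a small CPU-side C++ sketch in the spirit of these techniques (my own illustrative names and merge policy, not Intel's API or the exact Grid 2 algorithm): with ordered, serialized access to a pixel's data guaranteed by the hardware, each pixel keeps a small sorted fragment array and merges the closest pair when it overflows, so memory never grows. On the CPU, a plain function call stands in for the ordering guarantee.

```cpp
// Fixed-memory OIT insert + resolve, simplified for illustration.
#include <array>
#include <cstdio>

struct Fragment {
    float depth;
    float color;  // grayscale, to keep the sketch short
    float alpha;  // assumed > 0 in this toy example
};

constexpr int K = 4;  // fixed per-pixel storage: K fragment slots

struct PixelFragments {
    std::array<Fragment, K> slot{};
    int count = 0;

    void insert(Fragment f) {
        if (count == K) {
            // Full: merge the adjacent pair with the smallest depth gap so
            // the footprint stays fixed (a crude stand-in for the smarter
            // compression real techniques use).
            int m = 0;
            for (int i = 1; i + 1 < K; ++i)
                if (slot[i + 1].depth - slot[i].depth <
                    slot[m + 1].depth - slot[m].depth)
                    m = i;
            Fragment a = slot[m], b = slot[m + 1];
            float alpha = 1.0f - (1.0f - a.alpha) * (1.0f - b.alpha);
            float color = (a.color * a.alpha +
                           b.color * b.alpha * (1.0f - a.alpha)) / alpha;
            slot[m] = {a.depth, color, alpha};
            for (int i = m + 1; i + 1 < K; ++i) slot[i] = slot[i + 1];
            --count;
        }
        // Sorted insert, front to back.
        int i = count++;
        while (i > 0 && slot[i - 1].depth > f.depth) {
            slot[i] = slot[i - 1];
            --i;
        }
        slot[i] = f;
    }

    // Composite front to back: fixed cost, single pass over the K slots.
    float resolve(float background) const {
        float color = 0.0f, transmittance = 1.0f;
        for (int i = 0; i < count; ++i) {
            color += slot[i].color * slot[i].alpha * transmittance;
            transmittance *= 1.0f - slot[i].alpha;
        }
        return color + background * transmittance;
    }
};

int main() {
    PixelFragments px;
    // Fragments arrive in arbitrary depth order, as from rasterization.
    for (Fragment f : {Fragment{0.7f, 0.2f, 0.5f}, Fragment{0.3f, 0.9f, 0.4f},
                       Fragment{0.5f, 0.5f, 0.3f}, Fragment{0.9f, 0.1f, 0.6f},
                       Fragment{0.1f, 1.0f, 0.2f}})
        px.insert(f);
    std::printf("resolved: %f\n", px.resolve(0.0f));
}
```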

Can't wait to see what clever developers will do with it :cool:
 
Seems like you've got a lot of insight into the new Haswell stuff, nAo. You don't happen to have an explanation for why these two Haswell goodies aren't available in the "normal" Haswell versions, but only in the Iris versions?
 
Seems like you've got a lot of insight into the new Haswell stuff, nAo. You don't happen to have an explanation for why these two Haswell goodies aren't available in the "normal" Haswell versions, but only in the Iris versions?
Iris does the rendering, doesn't it?

Really hope that we will see shader model 6 soon with this and many other improvements.
 
Iris is just a fancy name for... the integrated graphics in Haswell. As far as Intel has disclosed, Iris Graphics is functionally identical to the other Haswell graphics parts - it is just faster (40 EUs instead of 20), and in the case of *draw deep breath* Intel Iris Pro Graphics 5200 aka GT3e it has 128 MiB of eDRAM to speed up proceedings.
 
Seems like you've got a lot of insight into the new Haswell stuff, nAo. You don't happen to have an explanation for why these two Haswell goodies aren't available in the "normal" Haswell versions, but only in the Iris versions?
As I said, I don't think that's true. If it is, it's just a judgement call on performance in that specific engine. The extensions are available on any Haswell GPU.

nAo and I both worked on this stuff (see authors for the papers I linked), hence the "insight" :)
 
So is AMD and NVIDIA hardware physically not capable of this, or have they not implemented it in software since it is not part of the D3D spec?

Also, generally speaking, I don't expect the Haswell IGP to run this game with performance/IQ comparable to midrange discrete cards. There is still an enormous performance gulf between Haswell GT3(e) and an HD 7850 or GTX 660. Although I guess the smoke will look better on the Intel IGP regardless...

BTW Andy I think it's great that you're pushing the limits of the API. Of course I wish this was available on all hardware, and I'm sure you feel the same. :smile:
 
As I said, I don't think that's true. If it is, it's just a judgement call on performance in that specific engine. The extensions are available on any Haswell GPU.

nAo and I both worked on this stuff (see authors for the papers I linked), hence the "insight" :)
I know, so I was hoping you could be tricked into leaking more information. ;) Seriously - I'd welcome it if I'm wrong and those features are indeed available on any Haswell GPU.

I don't want to break any NDAs here, so I'll wait until they lift before commenting on what's available with which final shipping products, but for starters Grid 2's readme explicitly said Intel Iris Pro/5200 graphics. OK, maybe that was preliminary information. I'll double-check on Monday when I'm back in the office.

Haswell CPUs are available in German e-tail anyway. :)
 
Hmm, it would be strange if this feature was limited to GT3e parts. The eDRAM shouldn't do anything more than increase performance as far as I can tell.

Of course it's doubtful any non-GT3e parts can run the game at such high settings, but in theory it should be possible, right?
 
Intel Iris Pro Graphics adds eDRAM – As the highest-performing of the 4th gen processors, the Intel Iris Pro Graphics 5200 adds a large on-chip, last-level cache, made with eDRAM. Simply use cache-friendly access patterns to take advantage of this eDRAM; no additional work is required. This can give a large performance gain to memory bandwidth-heavy operations like particle blending, post-processing, and other "write-then-read" operations, like shadow or reflection map rendering.
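
As a toy illustration of the "write-then-read" pattern the guide is talking about (my own made-up example in plain C++, nothing Iris-specific in the code itself): instead of writing a full-frame intermediate and then reading the whole thing back through memory, you process in tiles small enough that the intermediate stays resident in the cache.

```cpp
// Toy CPU-side illustration of a cache-friendly "write-then-read" pattern.
// Pass 1 writes an intermediate value per pixel; pass 2 reads it back.
// Doing this per tile keeps the intermediate hot in a cache instead of
// round-tripping a full-frame buffer through memory - the access pattern
// a large last-level cache like Iris Pro's eDRAM rewards. The sizes and
// the two passes themselves are made up purely for illustration.
#include <algorithm>
#include <cstddef>
#include <vector>

void writeThenReadTiled(const std::vector<float>& in, std::vector<float>& out,
                        size_t width, size_t height) {
    constexpr size_t T = 64;            // tile edge: T*T floats = 16 KiB,
    std::vector<float> scratch(T * T);  // small enough to stay cache-resident
    for (size_t ty = 0; ty < height; ty += T) {
        for (size_t tx = 0; tx < width; tx += T) {
            size_t tw = std::min(T, width - tx);
            size_t th = std::min(T, height - ty);
            // Pass 1: write the intermediate for this tile.
            for (size_t y = 0; y < th; ++y)
                for (size_t x = 0; x < tw; ++x)
                    scratch[y * T + x] = in[(ty + y) * width + (tx + x)] * 2.0f;
            // Pass 2: read it back while it's still hot.
            for (size_t y = 0; y < th; ++y)
                for (size_t x = 0; x < tw; ++x)
                    out[(ty + y) * width + (tx + x)] = scratch[y * T + x] + 1.0f;
        }
    }
}
```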

Iris Pro has great support for smoke and mirrors :)
 
So the plot thickens? The mojo is in the GPU, but marketing decides only the Iris-branded SKUs are allowed to weave their magic?
 
Why doesn't Intel support 2x MSAA natively, by the way?
Mostly because it's not required by DirectX, which requires only 4x and 8x, with some format omissions on 8x. In the past Intel has mostly just designed to the spec. I believe the drivers are forced to emulate 2x using 4x though since some games do not properly check for hardware support. In any case 2x will be natively supported in hardware in the future.

That said, and not to defend the omission, but 2x is probably the least useful MSAA mode (hence it not being required). It's like "AA" only in one axis :S It looks way worse than 4x and barely better than no AA in a lot of cases. 8x conversely barely looks better than 4x but costs way more (most hardware and compression is designed mostly for 4x). 16x however does look notably better than 4x.
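
For reference, here's the check games should be doing instead of assuming 2x exists; this is the standard D3D11 call, nothing vendor-specific (the sketch assumes an already-created device):

```cpp
// Query whether an MSAA sample count is actually supported for a format,
// instead of assuming it. D3D11 only mandates 4x (and 8x, with some format
// omissions), so anything else must be checked like this. Assumes 'device'
// is an already-created ID3D11Device; Windows-only, links against d3d11.
#include <d3d11.h>

bool MsaaSupported(ID3D11Device* device, DXGI_FORMAT format, UINT sampleCount) {
    UINT qualityLevels = 0;
    HRESULT hr = device->CheckMultisampleQualityLevels(format, sampleCount,
                                                       &qualityLevels);
    // Zero quality levels means the count is unsupported for this format.
    return SUCCEEDED(hr) && qualityLevels > 0;
}
```

E.g. call MsaaSupported(device, DXGI_FORMAT_R8G8B8A8_UNORM, 2) before creating a 2x render target, and fall back to another sample count if it returns false.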
 