Intel Broadwell for desktops

That's the one thing I don't like about these new CPUs. Too much GPU. Don't get me wrong, the GPU performance is amazing if that's what you want, but more serious gamers are going to be getting a dGPU. But look at that die shot: it seems about two-thirds of what I'm paying for in the CPU will never actually be used by me.
DirectX 12 explicit multiadapter will make it straightforward to offload the tail of your frame to the integrated GPU. As integrated GPUs are present in almost all PC processors (both Intel and AMD), I expect many developers to support this.

Unreal Engine preliminary results:
http://blogs.msdn.com/b/directx/arc...rmant-silicon-and-making-it-work-for-you.aspx
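
To make the "offload to the integrated GPU" part concrete, here's a minimal sketch of the very first step on PC (my own illustration, not from the blog post; assumes the Windows 10 SDK headers and omits error handling): enumerate the hardware adapters and create a D3D12 device on each, so both the dGPU and the iGPU are available for explicit multiadapter work.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create a D3D12 device on every hardware adapter (dGPU + iGPU).
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```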

DirectX 12 will also greatly reduce the CPU usage (driver overhead) of rendering code on PC. You will no longer be limited by a quad core CPU in any current generation console port (even at high settings + 60 fps). I would say that the integrated GPU is more useful than the 4 extra cores in games/engines that support DirectX 12 explicit multiadapter.

Unfortunately, my pessimistic assumption is that many developers are sticking with DirectX 11 for their PC ports. This is going to cause problems. The draw call submission thread overloads one CPU core, the other graphics thread that prepares the draw calls into a linear list (culling, etc.) takes another core (or is split across multiple small tasks adding up to roughly the same total CPU time), and the graphics driver overloads 1-2 CPU cores, leaving not much for the game logic, AI, destruction and physics. Let's see how it goes :)
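
For contrast with that DX11 picture, a minimal sketch of the DX12 submission model (my own illustration; assumes the device, a PSO and one command allocator per worker thread already exist, and the actual draw calls are elided): several threads record command lists in parallel and a single ExecuteCommandLists call submits them, so no single submission thread gets overloaded.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Record one command list per worker thread, then submit them all at once.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso,
                     std::vector<ComPtr<ID3D12CommandAllocator>>& allocators)
{
    const size_t threadCount = allocators.size();
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (size_t i = 0; i < threadCount; ++i)
    {
        // Command lists are created in the recording state.
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i]
        {
            // ... record this thread's slice of the frame's draw calls into lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers)
        w.join();

    // One submission for the whole frame instead of one overloaded submission thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```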
 
Just search for it on eBay. I got mine from the local classifieds.
The price will vary over a wide range. The lowest I've stumbled upon was $60, no warranty, just a 3-day money-back. Mine was pulled from a small-scale corporate mail server.

I'm hoping I'll be able to do the same for LGA-2011 within 3-4 years, getting an 8-10 core Xeon E5 for that much.



I wonder when we'll have preliminary results for DX12 Multi Adapter from other major engines, for example Frostbite ;)
 
DirectX 12 explicit multiadapter will make it straightforward to offload the tail of your frame to the integrated GPU. As integrated GPUs are present in almost all PC processors (both Intel and AMD), I expect many developers to support this.

Unreal Engine preliminary results:
http://blogs.msdn.com/b/directx/arc...rmant-silicon-and-making-it-work-for-you.aspx

DirectX 12 will also greatly reduce the CPU usage (driver overhead) of rendering code on PC. You will no longer be limited by a quad core CPU in any current generation console port (even at high settings + 60 fps). I would say that the integrated GPU is more useful than the 4 extra cores in games/engines that support DirectX 12 explicit multiadapter.

Unfortunately, my pessimistic assumption is that many developers are sticking with DirectX 11 for their PC ports. This is going to cause problems. The draw call submission thread overloads one CPU core, the other graphics thread that prepares the draw calls into a linear list (culling, etc.) takes another core (or is split across multiple small tasks adding up to roughly the same total CPU time), and the graphics driver overloads 1-2 CPU cores, leaving not much for the game logic, AI, destruction and physics. Let's see how it goes :)

Thanks, that's extremely interesting. Multi-adapter is exactly what I've been waiting for for a long time. I know AMD promised this with Mantle, but it seemed obvious back then that it wouldn't take off (under Mantle), so I'm encouraged by your optimism around developer support (the entrenchment of DX11 notwithstanding).

I watched Max's talk and one point jumped out: the main GPU metering the work passed over to the secondary GPU in real time, to ensure the secondary GPU would always take less time to complete its part of the frame. In other words, auto-balancing the workload regardless of the relative performance of your GPUs. If I've interpreted that correctly it would be pretty awesome.
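
If that's how it works, here's a rough sketch of the kind of feedback loop I'd imagine (purely my own guess at the heuristic, names hypothetical; Max's actual scheme may well differ): measure how long each GPU took last frame and shrink the secondary GPU's share whenever it risks becoming the bottleneck.

```cpp
#include <algorithm>

// Adjust the fraction of per-frame work handed to the secondary (integrated) GPU,
// based on how long each GPU took to finish its part of the previous frame.
float UpdateSecondaryShare(float currentShare, double primaryGpuMs, double secondaryGpuMs)
{
    const float kStep = 0.05f;
    if (secondaryGpuMs > primaryGpuMs)
        currentShare -= kStep;                  // secondary is falling behind: give it less
    else if (secondaryGpuMs < 0.8 * primaryGpuMs)
        currentShare += 0.5f * kStep;           // plenty of headroom: give it a bit more
    return std::max(0.0f, std::min(currentShare, 0.5f)); // never more than half the frame
}
```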

It'd be pretty interesting to see an i5 5675C beating out an i7 5960X because of its iGPU.
 
I'm very skeptical about multi-adapter. Game developers do a pretty poor job of supporting basic playability - how many games launch missing significant content, or with significant bugs? I just can't see game developers managing multiple asymmetric GPUs robustly. More likely, multi-adapter will be like hybrid crossfire: a feature that people chat about on forums, but without much real impact.
 
I'm very skeptical about multi-adapter. Game developers do a pretty poor job of supporting basic playability - how many games launch missing significant content, or with significant bugs? I just can't see game developers managing multiple asymmetric GPUs robustly.
I've been generally skeptical as well but there is one thing that shifts the balance considerably IMHO: async compute. Devs are really forced to do it for consoles, and once you've done the work to do async compute, it's fairly easy to test running that compute stuff on the iGPU and you have the advantage that if you do end up needing any 3D work, you can do that "async" on the iGPU as well.

So time will tell, but devs are already doing the majority of the work for consoles/GCN anyways.
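
To illustrate how small that extra step is, a minimal sketch (my own, assuming igpuDevice is an ID3D12Device created on the integrated adapter): create a dedicated compute queue on the iGPU, and the async compute work that already exists for consoles can be submitted there instead of to the dGPU.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// A compute-only queue on the iGPU device; existing async compute dispatches
// can be recorded into COMPUTE command lists and executed on this queue.
ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* igpuDevice)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    igpuDevice->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```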
 
Isn't it also the case that multiadapter requires DX12, which is harder than DX11, and therefore it will be the big engines that support it?
 
I've been generally skeptical as well but there is one thing that shifts the balance considerably IMHO: async compute. Devs are really forced to do it for consoles, and once you've done the work to do async compute, it's fairly easy to test running that compute stuff on the iGPU and you have the advantage that if you do end up needing any 3D work, you can do that "async" on the iGPU as well.

So time will tell, but devs are already doing the majority of the work for consoles/GCN anyways.

This would also bypass the requirement for a low-latency link between the CPU and GPU, if I'm not mistaken? The best of all worlds in a way: you get very low-latency communication between CPU and GPU by using the iGPU for those tasks that need it, and you get the big rendering performance of a dGPU for tasks that aren't dependent on that low-latency link.

That's my idealized view of the world anyway.
 
I was wondering whether multiadapter makes a local copy of the data a necessity, or can you work directly out of a different video card's memory? (Not that that's always optimal.) Does anyone know?
 
Now don't you guys make me regret the investment in an X79 system for my main gaming rig!
 
I was wondering whether multiadapter makes a local copy of the data a necessity, or can you work directly out of a different video card's memory? (Not that that's always optimal.) Does anyone know?

It can work directly out of another GPU's memory according to Max's video - that would mean system memory for an iGPU.
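
For reference, a rough sketch of what that looks like with D3D12 cross-adapter shared resources (my own reading of the docs; details like resource states and alignment are simplified, so treat it as a sketch rather than gospel): create the buffer once with cross-adapter heap flags, then open the same memory on the other device through a shared handle instead of copying it.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a buffer on deviceA that deviceB can access through a shared handle.
HRESULT ShareBufferAcrossAdapters(ID3D12Device* deviceA, ID3D12Device* deviceB,
                                  UINT64 sizeInBytes,
                                  ComPtr<ID3D12Resource>& resourceA,
                                  ComPtr<ID3D12Resource>& resourceB)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = sizeInBytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;           // required layout for buffers
    desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    // The heap backing the resource must be shareable across adapters.
    HRESULT hr = deviceA->CreateCommittedResource(
        &heapProps,
        D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER,
        &desc, D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&resourceA));
    if (FAILED(hr)) return hr;

    // Export an NT handle from device A and open the same memory on device B.
    HANDLE sharedHandle = nullptr;
    hr = deviceA->CreateSharedHandle(resourceA.Get(), nullptr, GENERIC_ALL, nullptr,
                                     &sharedHandle);
    if (FAILED(hr)) return hr;

    hr = deviceB->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&resourceB));
    CloseHandle(sharedHandle);
    return hr;
}
```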
 
It's fine, but no Haswell and DDR4 bliss for you. :p

Meeeh... Haswell's IPC upgrade is marginal at best, and as for DDR4, I already have >50 GB/s RAM bandwidth from quad-channel, super-cheap memory :)
 
Yeah, I'm not looking forward to replacing my cheap memory (32 GB for ~110 USD back in 2012) when I upgrade to Skylake later this year or next.

Regards,
SB
 
Yeah, I'm not looking forward to replacing my cheap memory (32 GB for ~110 USD back in 2012) when I upgrade to Skylake later this year or next.
Yeah, those were the good days of memory prices :) On the plus side, the delta between DDR3 and DDR4 has fallen significantly in the past few months... for 2x8GB it's now in the range of ~$170 for DDR3 and ~$195 for DDR4. E.g.:
http://www.ncixus.com/products/?sku=100645&vpn=CT4K4G4DFS8213&manufacture=CRUCIAL TECHNOLOGY

You can always get cheaper, but that's not nearly as bad as it was last year.
 
The cheapest DDR3 modules here are still being sold for over 8€/GB.
That's still a lot compared to the sub-3€/GB we had in 2012.
 