Intel Gen9 Skylake

Luckily, I'm all set with a duo of RHB740 directly from Intel. But a 90-ish watt Alan Wake would be nice.
 
Thanks for the slides;
they confirm EDRAM for a 65+ W CPU, but to date you can still only buy a 45W one (Xeon E3), sigh.
Nice info on the other aspects as well.

Cheers
 
As we have known for a while now, 65W GT4e for desktop is coming between Q4 '16 and Q2 '17.
Yeah, but that is an unacceptable wait considering it is not a technical problem, and EDRAM does have benefits in games (for some, anyway) even with a discrete GPU.
But unfortunately many of us would like it on a Core i7 model, and it looks like that is not going to happen. Also, by Q4 2016 we should see Knights Landing, which I wonder will be the first consumer part at that 65W with EDRAM.

I get the feeling releasing Broadwell kind of screwed every customer over: from a business perspective they needed to differentiate it from Skylake to some degree, and much of what we are seeing is business decisions rather than engineering/technical ones, with Skylake-generation functionality and specs being delayed.

But in a capitalist consumer world they are learning (or not, it seems) that people do not automatically buy for minimal steps forward, which is one reason PC market sales keep dropping; on the business side there is still some stalling as well, which I wonder will affect them too.
Cheers
 
Also, by Q4 2016 we should see Knights Landing, which I wonder will be the first consumer part at that 65W with EDRAM.
Pretty sure you meant Kaby Lake, aka Baby Cake, aka Alan Wake, not Knights Landing, which is a many-core Xeon.
 
On my HD 630 with driver 21.20.16.4589 there is support for StandardSwizzle64KB.


Direct3D 12 feature checker (July 2015) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

ADAPTER 0
"Intel(R) HD Graphics 630"
VEN_8086, DEV_5912, SUBSYS_86941043, REV_04
Dedicated video memory : 134217728 bytes
Total video memory : 4294901760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT (2)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_3 (3)
StandardSwizzle64KBSupported : 1
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 1
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 38
MaxGPUVirtualAddressBitsPerProcess : 48
Adapter Node 0: TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1
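
If anyone wants to reproduce those caps on their own driver, here is a minimal C++ sketch of my own (not DmitryKo's tool; Windows 10 SDK assumed, error handling and real adapter selection elided) that queries the same D3D12_OPTIONS structure through ID3D12Device::CheckFeatureSupport:

// Minimal sketch: query a few of the D3D12_OPTIONS caps printed above.
// Assumes the Windows 10 SDK; error handling elided for brevity.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);   // ADAPTER 0, as in the dump above

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("StandardSwizzle64KBSupported : %d\n", opts.StandardSwizzle64KBSupported);
    std::printf("ROVsSupported                : %d\n", opts.ROVsSupported);
    std::printf("ConservativeRasterizationTier: %d\n", opts.ConservativeRasterizationTier);
    return 0;
}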
 
According to CompuBench (https://compubench.com; click iGPU and hunt through the Info tabs of several of the top performers),
Intel has added cl_khr_fp16 and cl_khr_fp64 to their OpenCL GPU driver! YES!

Does anyone have any idea of the performance of cl_khr_fp16 and cl_khr_fp64?
Is fp16 = 2x fp32? Is fp64 = 1/2 fp32?
I can see numbers for fp32 performance on Phoronix (https://www.phoronix.com/scan.php?page=news_item&px=Fresh-Beignets-Clear-Linux-2017),
which has a Core i5 7600K at 340.75 GFLOPS.
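
If anyone wants to verify the extensions on their own machine before trusting the CompuBench info tabs, a minimal sketch like this (single platform and single GPU device assumed, error handling elided) just checks the extension string the driver reports:

// Minimal sketch: check whether the OpenCL GPU driver reports
// cl_khr_fp16 / cl_khr_fp64 in its extension string.
#include <CL/cl.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    cl_platform_id platform = nullptr;
    clGetPlatformIDs(1, &platform, nullptr);        // first platform only (assumption)

    cl_device_id device = nullptr;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    size_t len = 0;
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, nullptr, &len);
    std::vector<char> ext(len);
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, len, ext.data(), nullptr);

    std::printf("cl_khr_fp16: %s\n", std::strstr(ext.data(), "cl_khr_fp16") ? "yes" : "no");
    std::printf("cl_khr_fp64: %s\n", std::strstr(ext.data(), "cl_khr_fp64") ? "yes" : "no");
    return 0;
}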

In the Beignet docs it states
"Precision issue.
Currently Gen does not provide native support of high precision math functions
required by OpenCL. We provide a software version to achieve high precision,
which you can turn off through
`# export OCL_STRICT_CONFORMANCE=0`.
This loses some precision but gains performance."

I'd be interested in how Intel GPU and CPU (AVX-512) hardware compares to NVIDIA and AMD in terms of hardware fp64 precision, especially rsqrt vs native_rsqrt.
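
I don't have hard numbers either, but the precision side at least is easy to measure yourself. Here is a hypothetical kernel of mine (OpenCL C held in a C++ raw string, to be built with clCreateProgramWithSource) that reports how far native_rsqrt drifts from the conformant rsqrt on whatever device you run it on:

// Hypothetical kernel (for illustration): compare the conformant rsqrt
// against the fast native_rsqrt; rel_err shows how much precision the
// native hardware path gives up on a given device.
static const char* kRsqrtErrorKernel = R"CLC(
__kernel void rsqrt_error(__global const float* x, __global float* rel_err)
{
    size_t i = get_global_id(0);
    float precise = rsqrt(x[i]);         // conformant path (per the Beignet note below,
                                         // software-assisted on Gen unless strictness is off)
    float fast    = native_rsqrt(x[i]);  // maps to the hardware approximation
    rel_err[i] = fabs(precise - fast) / fabs(precise);
}
)CLC";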

How would an Apollo Lake GPU compare with a Jetson TK1 in fp32 and fp16 GFLOPS for embedded deep learning (assuming the software were ever written for Apollo Lake)?

Regards Michael
 
I'll have to try that driver out.

I've been playing around with my laptop and tried undervolting. Seems like my processor can't sustain the CPU boost clock while running the GPU at full speed. Definitely a struggle to get full performance out of these things.
 
You ain't seen nothing until you have witnessed a Skylake GT3+EDRAM chip in an 18W configuration trying to play 3D games.
 