The Intel Execution in [2023]

It's only for laptops because they couldn't get them fast enough for desktop (i.e., it hits a clock wall).
It was supposed to be desktop too.
I heard that rumour too, and it's probably true. That still doesn't make it a bad product; it just tells me they prioritized the low-power end of things and maybe thought they could still squeeze the frequency out of it, but then realized they couldn't.

Transistor type is just one specific decision point they have to make, but it's pretty easy to see how this would happen if they chose more of the 2-1 and 2-2 cell types to optimize for power and density; see the rough sketch below:

(and yes I know the compute tile is made on the Intel 4 process and not TSMC, but Intel's foundry side has similar options too)
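
A toy model to make that reasoning concrete (every cell variant and number below is invented purely for illustration; real libraries and PDKs differ):

```cpp
// Back-of-the-envelope model of a standard-cell library mix.
// All cell variants and figures are hypothetical, for illustration only.
#include <cstdio>

struct CellVariant {
    const char* name;
    double rel_density;  // logic per mm^2, normalized to the HP cell
    double rel_power;    // dynamic power, normalized to the HP cell
    double rel_drive;    // drive strength, which sets the Fmax headroom
};

int main() {
    const CellVariant hd{"high-density (2-1/2-2 style)", 1.30, 0.80, 0.75};
    const CellVariant hp{"high-performance (taller cell)", 1.00, 1.00, 1.00};

    // Suppose 85% of the design uses the denser, lower-power cells...
    const double hd_share = 0.85;
    const double density = hd_share * hd.rel_density + (1.0 - hd_share) * hp.rel_density;
    const double power   = hd_share * hd.rel_power   + (1.0 - hd_share) * hp.rel_power;

    // ...area and power average out nicely, but the critical path is gated by
    // the weakest cells sitting on it, so the clock ceiling follows rel_drive.
    const double fmax_ceiling = hd.rel_drive;

    std::printf("density x%.2f, power x%.2f, Fmax ceiling x%.2f (vs. an all-HP design)\n",
                density, power, fmax_ceiling);
    return 0;
}
```

The point is simply that a mix weighted toward the dense, low-power cells averages out well on area and power but puts a hard ceiling on clocks, which is exactly the mobile-first behaviour being described.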
 
Really just quite lousy. Even ignoring the lack of actual architectural improvements on the CPU side, Intel 4 seems to offer very little in the way of real efficiency gains. And I think it's because of what I was talking about earlier: they are rushing, and thus aren't actually waiting for the processes to mature and scale in a way that allows them to meaningfully improve their products for customers. I'd bet Intel 3 will be a nice improvement, but consumers won't benefit from it because Intel isn't using it for consumer products. I fully expect 20A to have a similar issue, all while Lunar Lake will probably bring genuinely good CPU efficiency gains thanks to TSMC.

The GPU looks decent, but not exactly spectacular. It's still behind AMD in actual games, and really only looks decent at all because they've packed in enough cores and are using the highly mature TSMC 5nm process to get reasonable clocks. It also looks like these Meteor Lake parts with Arc GPUs are going to be quite expensive, which is probably another problem they have: this chiplet strategy is likely much more expensive than their monolithic one.

I think 2024 is going to be quite a bad year for Intel overall. Maybe they can get Battlemage GPUs out sometime by summer or so, but they'll probably be going up against RDNA4 at the same time, and Zen 5 is just going to eat Intel's lunch in basically every part of the market. Undoubtedly Intel is making lots of progress behind the scenes, but it feels like it's going to be a while yet before they can start demonstrating that in actual products.
 
The only way Intel can ever be competitive in GPU performance is if they start cutting features, capabilities, and functionality from their egregiously over-engineered graphics hardware ...

There are at least three different SIMD width modes (SIMD8, SIMD16, and SIMD32), and SIMD32 is incompatible with ray tracing?! The hardware design supports both the SIMT and SIMD programming models, when most hardware vendors exclusively pick one model or the other and are done with it. Their hardware also has two different paths for tiled/sparse resources, with one of them only compatible with the graphics queue! They might want to re-evaluate whether the complexity of their hardware ray tracing implementation is worthwhile if big new projects on the biggest game engine keep being released without making use of the feature. And do they keep supporting XMX units in future hardware designs if no ML framework like PyTorch offers official support and AAA games won't consistently support XeSS?
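
For what it's worth, the multiple SIMD widths are visible even from the software side. Here's a minimal SYCL/oneAPI sketch, assuming an Intel GPU and the DPC++ compiler, that queries which sub-group (SIMD) sizes the device exposes and pins one kernel to SIMD16:

```cpp
// Query the SIMD (sub-group) widths a GPU exposes and request one explicitly
// for a kernel. Assumes oneAPI DPC++ and a SYCL-visible GPU device.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q{sycl::gpu_selector_v};

    // Typically reports 8, 16 and 32 on Xe-class hardware.
    for (auto w : q.get_device().get_info<sycl::info::device::sub_group_sizes>())
        std::cout << "supported sub-group size: " << w << "\n";

    constexpr size_t N = 1024;
    sycl::buffer<float, 1> buf{sycl::range<1>{N}};

    q.submit([&](sycl::handler& h) {
        sycl::accessor out{buf, h, sycl::write_only};
        // Pin this kernel to SIMD16; which width the compiler would otherwise
        // pick (and how that interacts with features like ray tracing) is
        // exactly the kind of hardware detail discussed above.
        h.parallel_for(
            sycl::nd_range<1>{N, 64},
            [=](sycl::nd_item<1> it) [[sycl::reqd_sub_group_size(16)]] {
                out[it.get_global_id(0)] =
                    static_cast<float>(it.get_sub_group().get_local_linear_id());
            });
    });
    q.wait();
    return 0;
}
```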
 
Weird takeaway to me, honestly. It feels like Intel is already on AMD's heels in terms of GPU technology on equal process footing. AMD should be quite scared that Intel can achieve nearly what AMD has without decades of trying in this field. This absolutely seems like something Intel can figure out, as they really don't need much more to get ahead.

It's their CPU and process nodes that really feel like they're not keeping up in the more important markets.
 
@Bold Have you seen the die sizes of Intel graphics hardware to earnestly make that statement?

Their nearest competitor, with just half the logic area on the exact same process technology, is able to match the aggregate performance of Intel's own highest-end option ...
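
To make the area point concrete with deliberately made-up numbers (these are not real measurements of any specific SKUs):

```cpp
// Toy perf/power/area comparison. Every figure is hypothetical; the point is
// only that equal headline performance can hide a 2x gap in perf per mm^2.
#include <cstdio>

struct Gpu {
    const char* name;
    double perf;      // relative gaming performance
    double power_w;   // board power, watts
    double area_mm2;  // GPU logic area, mm^2 (same process node assumed)
};

int main() {
    const Gpu gpus[] = {
        {"smaller competitor", 1.00, 190.0, 200.0},
        {"bigger part",        1.00, 225.0, 400.0},
    };

    for (const Gpu& g : gpus)
        std::printf("%-20s perf %.2f  perf/W %.4f  perf/mm^2 %.4f\n",
                    g.name, g.perf, g.perf / g.power_w, g.perf / g.area_mm2);

    // Equal performance at half the area means half the die cost per unit of
    // performance, which shows up in margins or pricing rather than benchmarks.
    return 0;
}
```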
 
You really need to weigh anything GPU against the magical power-perf-area triangle, and the point above about matching Intel's highest-end part with half the logic area on the same node is exactly that. See, he gets it.
 
I've been critical of Intel's lack of performance per mm², no doubt. But unless it results in high pricing, it's not anything consumers care about. And with AMD seemingly putting higher-end GPUs on hiatus for the time being, it absolutely gives Intel an opening to get in there and start competing with them.

Either way, consumer GPU stuff is still generally small fry for both these companies.
 
Maybe Intel should shim OpenXR so they can force ExtraSS for VR; that seems like the most likely use case.

PS. I see OpenXR even has official support for layering the API. Don't even need to use ugly DLL hacks.

PPS. Oh, it uses the g-buffer. Guess this would need to live in the game engine, or need a new OpenXR extension.
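
On the layering point, for anyone curious: an OpenXR instance takes a list of API layers at creation time, so an interposer can sit between the app and the runtime without any DLL patching. A minimal sketch follows; the layer name below is hypothetical, and as the PPS notes, a frame-generation layer would still need g-buffer access solved somehow:

```cpp
// Enumerate OpenXR API layers and create an instance with one enabled.
// The layer name is made up; requires the OpenXR loader (openxr_loader).
#include <openxr/openxr.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    // List the API layers the loader can find on this system.
    uint32_t count = 0;
    xrEnumerateApiLayerProperties(0, &count, nullptr);
    std::vector<XrApiLayerProperties> layers(count, {XR_TYPE_API_LAYER_PROPERTIES});
    xrEnumerateApiLayerProperties(count, &count, layers.data());
    for (const auto& l : layers)
        std::printf("available layer: %s\n", l.layerName);

    // Request a (hypothetical) frame-interpolation layer by name. If it isn't
    // installed, xrCreateInstance returns XR_ERROR_API_LAYER_NOT_PRESENT.
    const char* wanted[] = {"XR_APILAYER_VENDOR_frame_interpolation"};

    XrInstanceCreateInfo ci{XR_TYPE_INSTANCE_CREATE_INFO};
    std::strcpy(ci.applicationInfo.applicationName, "layer-demo");
    ci.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
    ci.enabledApiLayerCount = 1;
    ci.enabledApiLayerNames = wanted;

    XrInstance instance = XR_NULL_HANDLE;
    const XrResult res = xrCreateInstance(&ci, &instance);
    std::printf("xrCreateInstance: %d\n", static_cast<int>(res));
    if (XR_SUCCEEDED(res))
        xrDestroyInstance(instance);
    return 0;
}
```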
 