Sandy Bridge preview

According to Dresdenboy's articles, 2MB of cache was supposed to be reserved for the GPU. Any idea what that might be used for?

I am curious to know if the GPU can access data (vertex data, for instance) directly from the CPU's cache. IMHO, it would be VERY impressive and desirable if it were possible.
 
Are you saying the 2MB of cache difference I'm seeing is reserved, and that the 6MB variant perhaps doesn't have GPU support, or is it 2MB all the way down the line? In either case, I'm curious whether that difference alone is enough to justify the next tiered purchase.
 
My first guess upon seeing 2 variants with 6MB and 8MB cache was that perhaps the 8MB one doesn't have a GPU. That seems not to be the case.
 
Oh, absolutely, and I think the incredibly smart way turbo is managed is a big part of it as well. These chips deliver power where your machine needs it in order to make the most of those 35W; the whole idea of an aggressive turbo mode split between the CPU and GPU just makes so much sense for mobile computers.

You know Arrandale already does the load sharing between the CPU and GPU. ;)

The 2MB being dedicated was a pure WAG by some people that were trying to analyze the die. It looks like that's not it...
 
Yes, but the turbo mode wasn't nearly as aggressive and the IGP wasn't all that useful either.
 
Depends on the part -- the "LM" lines (25W TDP) start the IGP at 166MHz and turbo it all the way to 566MHz; the CPU cores also start pretty low (~15x multiplier) but can gain 4 or 6 extra bins depending on dual-core or single-core use. I think this is a pretty fair precursor to what SB is going to do across all its lines.
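A rough sketch of what those multiplier bins work out to, assuming the standard 133MHz base clock; the ~15x base multiplier and the 4/6-bin turbo figures are taken from the discussion above, not from a datasheet:

```python
# Hypothetical illustration of Arrandale-style turbo binning.
# Assumes a 133 MHz base clock (BCLK); multiplier and bin counts
# come from the post above, not from an Intel datasheet.

BCLK_MHZ = 133

def core_clock(base_multi, turbo_bins):
    """Effective core clock when turbo adds extra multiplier bins."""
    return BCLK_MHZ * (base_multi + turbo_bins)

base = core_clock(15, 0)               # ~15x base multiplier
dual_core_turbo = core_clock(15, 4)    # +4 bins with two cores loaded
single_core_turbo = core_clock(15, 6)  # +6 bins with one core loaded

print(base, dual_core_turbo, single_core_turbo)
# 1995 2527 2793  (MHz)
```

So even from a low ~2GHz starting point, single-core turbo lands near 2.8GHz, which is why the low-TDP LM parts can look surprisingly quick under light loads.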

The new i7-660LM is very interesting to me as an alternative to the i5-520M. The LM has 133MHz less stock clock speed, but consumes 10W less and is capable of turboing quite a bit higher and more consistently. It also has 33% more cache, although its IGP does start at a far lower clock.

As for the IGP being useful in the Arrandales? As the owner of a switchable-graphics device with an Arrandale 520M, I find it far more useful than any prior IGP I've ever used. I've even played Fallout 3 with it, albeit at medium details, without really any complaint.
 
Probably it is not part of the GPU complex at all.
It's located near the NB/PCIe controllers, so I guess it will be synced to the uncore clock domain and will interface with the rest of the clients (CPU cores and GPU) over the same L3 ring bus.
 
I had imagined it would be tightly integrated with the GPU/decoder block, since the two will almost always be used simultaneously.
 
I'm not comfy with the strong possibility of overclocking getting the curb treatment.

BUT, that being said, I'm very excited about having one of these in a laptop. Given all the improvements in raw performance, video decode, and the GPU, and most importantly the ability for power to be shifted across all those parts in a very fluid way, I think this will make a killer mobile platform.

So, Gulftown for my desktop, and SB for my laptop. :)
 
I don't understand the purpose.

Video transcode was one of those 'killer apps' for GPUs -- but a GPU at full tilt transcoding your 1080p video, while fast, also consumes egregious amounts of power. The tiny transcode engine in SB will use FAR less power while appearing to deliver similar performance.
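As a back-of-the-envelope illustration of that argument (the wattages and runtimes below are purely hypothetical, chosen only to show the energy math): a GPU that finishes the job faster can still burn far more total energy than a slower, low-power fixed-function block.

```python
# Hypothetical numbers only -- chosen to illustrate energy = power * time,
# not measured figures for any real GPU or for SB's transcode engine.

def energy_wh(power_watts, seconds):
    """Total energy in watt-hours for a job running at constant power."""
    return power_watts * seconds / 3600

gpu_energy = energy_wh(150, 60)    # discrete GPU: 150 W, finishes in 60 s
fixed_energy = energy_wh(5, 80)    # fixed-function block: 5 W, takes 80 s

print(round(gpu_energy, 3), round(fixed_energy, 3))
# 2.5 0.111
```

Under those made-up numbers the GPU is a third faster but uses over 20x the energy, which is exactly the trade-off that matters on battery.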

So, the purpose is simply for Intel to stick it to the GPU manufacturers, primarily NV.
 
I don't think the blow is aimed only at NVIDIA; video transcoding is useful for the average user converting video for a smartphone or tablet. Intel will have a huge competitive advantage over AMD in this regard.

It will also allow Intel to lower core counts and clock speeds while still offering impressive performance. It's one of the only compute-intensive tasks that non-gaming users put their computers to these days.

I wonder how a single-core SNB + 6 EUs would fare against AMD's upcoming offering (the Ontario line).
 
Did somebody make an estimate of the SNB die size?
Hardware.fr has a shot of a wafer, if it helps.
 
The only reason I singled out NVIDIA on the video transcode is, quite frankly, precisely thop's point: ATI/AMD's presence is certainly there, but it has nowhere near the mass of NVIDIA in this market.

That, and given Jen-Hsun's recent blatherings about their GPUs invalidating CPUs (with one of the keynotes touting their utter dominance of video transcode), I think it's funny that Intel fired this across their bow :)
 