Llano IGP vs SNB IGP vs IVB IGP

The architecture really needs to have a sufficient L3/core ratio. Each core needs a tile of L3 if it's to be worthwhile.
Oh, not disputing that. Just pointing out it's not quite correct if you look at the CPU/GPU ratio and attribute the L3 cache solely to the CPU.

The GPU uses the L3 as well, though I have not seen an analysis of how much.
Not sure either, but I'm quite sure it's one of the reasons the SNB IGP is faster than the Arrandale IGP.
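As a rough illustration of the L3/core ratio being discussed, here's a quick Python sketch using the commonly quoted cache configurations; the part list and sizes are just examples I'm assuming for comparison. Note that on SNB the L3 is the last-level cache the IGP shares over the ring bus, while Llano has no L3 at all, only 1 MB of L2 per core.

[code]
# Rough L3-per-core comparison; cache sizes are the commonly quoted
# configurations and are meant as an illustration, not a spec sheet.
parts = {
    "SNB i7-2600 (4 cores)":   {"cores": 4, "l3_mb": 8},  # L3 also serves the IGP via the ring bus
    "SNB i5-2500 (4 cores)":   {"cores": 4, "l3_mb": 6},
    "SNB i3-2100 (2 cores)":   {"cores": 2, "l3_mb": 3},
    "Llano A8-3850 (4 cores)": {"cores": 4, "l3_mb": 0},  # no L3; 1 MB of L2 per core instead
}

for name, p in parts.items():
    print(f"{name:26s} {p['l3_mb']} MB L3 -> {p['l3_mb'] / p['cores']:.2f} MB per core")
[/code]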

It is interesting that Intel is maintaining the very rectangular die shapes for these smaller dies. Perhaps it is the ever-expanding IO capability or some quirk of Intel's modular design.
I think modular design plays a role (just like with SNB). Makes it easy to add/remove cores without thinking too much and without wasting too much die area.
 
Mczak, it looks like the recent Sandy Bridge graphics drivers have improved throughput in the 3DMark06 batch size test at the lower sizes. The improvement is at the 8/32/128-triangle sizes, and overall the gains are roughly 20%. What's the significance of this? There are quite a few DX games that gained performance with the new driver, and I'm thinking it's related to the batch performance improvement.
 
It's a good thing Intel is finally improving their drivers. Hopefully within 12 months their drivers will be decent enough for some games.
 
You're certainly exaggerating the playability of the Sandy Bridge IGP, and that's not related to the question either.

Does anyone know what low-triangle-count batch performance actually has to do with game performance? From the little info I could find, I couldn't get anything beyond it having to do with "optimized performance with larger sizes".
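From what I understand, the small-batch numbers are mostly a measure of per-draw CPU/driver overhead rather than raw triangle rate, so a driver that trims that overhead shows its gains at the 8/32/128 sizes and in draw-call-heavy DX games. Here's a toy model with invented cost numbers, just to show the shape of the effect, not actual driver behaviour:

[code]
# Toy model: triangle throughput vs batch size when every draw call carries a
# fixed CPU/driver cost. All costs below are invented, purely for illustration.
PER_TRIANGLE_NS = 5.0      # assumed GPU cost per triangle
OLD_OVERHEAD_NS = 10000.0  # assumed per-draw driver overhead (old driver)
NEW_OVERHEAD_NS = 7000.0   # assumed overhead after a driver optimisation

def mtris_per_sec(batch_size, overhead_ns):
    """Million triangles per second for a given batch size and per-draw overhead."""
    ns_per_batch = overhead_ns + batch_size * PER_TRIANGLE_NS
    return batch_size / ns_per_batch * 1e3

for size in (8, 32, 128, 4096, 65536):
    old = mtris_per_sec(size, OLD_OVERHEAD_NS)
    new = mtris_per_sec(size, NEW_OVERHEAD_NS)
    print(f"batch {size:6d}: {old:7.2f} -> {new:7.2f} Mtris/s ({(new / old - 1) * 100:+.0f}%)")
[/code]

With numbers like these the improvement is large at 8/32/128 triangles and fades once batches get big enough that per-triangle work dominates, which would fit games that issue lots of small draw calls gaining the most.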
 
I'd need more detail, but I was actually quite impressed with it nearly getting 30fps in RE5 at 1080p, and 50fps in SFIV at 1080p. That's fantastic for an integrated part; nothing else integrated comes close at the moment, if those numbers are accurate.
 
I'll echo the sentiments expressed thus far - the game benchmarks are quite impressive while the CPU-bound benchmarks are not. Guess we'll have to wait for the next generation hardware to see some real powerhouse results all around.
 
The first generation of APUs is more impressive than I expected: not only did they make low-end cards obsolete, but mid-range ones too. A few generations from now only the most hardcore gamers will need discrete cards; everyone else will be gaming on APUs.

Unfortunately for Nvidia, the only mainstream gaming APUs in town are AMD's; Intel hardly counts. Of course NV will have ARM+GPU in Maxwell, but the lack of x86 compatibility will make it a niche product. Perfect for tablets, hardly suitable for current PC gamers.
 
I somehow doubt Maxwell will be anywhere near suitable for the tablet market: we're talking about an architecture that's bound to be very HPC-oriented, here.

But actually, it's fortunate for NVIDIA that Intel's integrated graphics blows: it means they can still sell mid-range graphics cards to Intel users.
 
It would make sense for Maxwell to go top to bottom, like AMD is doing (from 5W to 100W APUs). Although NV has mostly mentioned Maxwell in an HPC context, so they may keep the Tegra line separate, who knows.
 
Denver would certainly show up in tablets and smartphones.

A Maxwell derivative might show up in phones/tablets 2-3 years after its desktop cousin.
 
That's what I expect, yes. Over time, Tegra may start to integrate a few features from NVIDIA's current (or recent) GPUs, but I wouldn't expect convergence any time soon.
 
I agree; it still depends on how efficient the new-gen NV GPUs will be. They should have learned from the Fermi debacle, and if the new chips are very efficient they'll make it to tablets sooner rather than later. A unified approach would cut down R&D and time to market, and the recent focus on a "lego blocks" design structure should make scaling easier.

That said, is there a mainstream market for desktop Maxwell? AMD and Intel APUs have the vast x86 market; who really wants ARM in their PC? Win8 should help, but the lack of native apps (outside the mobile market) will really slow NV's ARM adoption. MS is promising emulation of x86 on WARM, but these CPUs are too slow for anything more serious than office and internet (even the A15), and emulation won't help with speed either. Sure, there will be a few enthusiasts, but no mass adoption, probably ever.
 
Actually, MS has promised otherwise.
You're right. I remembered how Intel claimed Win8 won't run legacy software on ARM devices and Microsoft responded that Intel was wrong and misleading, but I missed the part where MS said legacy apps will run on x86, but not on ARM :LOL:

Scratch my point above about how slow NV ARM adoption will be in the mainstream PC market; it simply won't happen.
 
Sigh, I really post too much off-topic stuff these days but...
I somehow doubt Maxwell will be anywhere near suitable for the tablet market: we're talking about an architecture that's bound to be very HPC-oriented, here.
First, it is very important to dissociate the Maxwell GPU core and the Denver CPU core, which are both used in the first Maxwell-based chip. Not every Maxwell chip will necessarily use Denver. I'm not sure that's the way NVIDIA thinks about their naming scheme (which, in the G9x/GT2xx generation, one of their top engineers told someone I know he couldn't keep up with anyway), but I can't find any other way to dissociate the two clearly.

Maxwell's GPU and the system architecture of that first chip are very HPC-oriented, but the Project Denver CPU itself is nearly certainly not. Remember the idea is to run the FP-heavy stuff on the GPU, not the CPU. I'd be very surprised if we had more than a single 128-bit FMA here - which Cortex-A15 already has!
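Just to put the "single 128-bit FMA" remark into numbers: 128 bits is four FP32 lanes and an FMA counts as two operations, so peak throughput scales as below. The clocks and core counts are round figures I'm assuming for illustration, not anything announced.

[code]
# Back-of-the-envelope peak FP32 throughput for one 128-bit FMA unit per core.
# Core counts and clocks are illustrative round numbers, not product specs.
def cpu_peak_gflops(cores, ghz, simd_bits=128, fma=True):
    lanes = simd_bits // 32                      # FP32 lanes per SIMD unit
    flops_per_cycle = lanes * (2 if fma else 1)  # FMA = multiply + add
    return cores * ghz * flops_per_cycle

print(cpu_peak_gflops(4, 2.0))             # 64.0 GFLOPS with FMA
print(cpu_peak_gflops(4, 2.0, fma=False))  # 32.0 GFLOPS without FMA
[/code]

Either way that's well over an order of magnitude below what the GPU side of such a chip would be expected to deliver, which is the whole point of leaving the FP-heavy work to the GPU.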

As for the GPU, AFAIK the next-generation Tegra GPU is only coming in Logan which is likely slated for late 2012/early 2013 tape-out on 28HPM with 2H13 end-product availability. That will also be the first Tegra with Cortex-A15, as the 2012 Wayne is much more incremental. So the timeframe for next-gen Tegra GPU and the Maxwell GPU is surprisingly not that different, but the former comes up earlier than the latter and is one process node behind.

So I think architectural convergence is very very unlikely, unless it is the Maxwell GPU itself that is a next-gen Tegra GPU derivative, which would be completely crazy but rather in line with Jen-Hsun's insistence that Tegra is the future of the company and that performance will be much more limited by perf/watt than perf/mm² (and already is).

As for ARM CPU adoption on PCs... I think there's a strong possibility that many notebooks will evolve towards also having a touchscreen over time. That makes Metro UI and the like more attractive, and significantly reduces the relative appeal of legacy application compatibility. But yeah, desktops? No way. Maybe hell has already frozen over now that Duke Nukem Forever is released, but there's no way desktops are ever switching to ARM. Maybe some niche 'desktop' functions like Windows HTPCs, but that's more likely to migrate towards ARM by moving away from Windows anyway.

Back on topic:
But actually, it's fortunate for NVIDIA that Intel's integrated graphics blows: it means they can still sell mid-range graphics cards to Intel users.
Yeah, 64-bit discrete GPUs are clearly a thing of the past though.

Llano is very impressive, but I wonder how bandwidth-limited it really is; I really wish someone would benchmark it with different DDR3 module speeds. If it's very limited, then there may not be much room to grow before DDR4 becomes mainstream, or some other clever trick is used (silicon interposers, as rumoured for Intel Haswell?).
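The theoretical ceilings are easy to work out (effective transfer rate times bus width), which at least bounds how much a faster DDR3 kit could help before DDR4 or interposers show up. Quick sketch below; the 64-bit GDDR5 line assumes a 4 Gbps data rate purely for comparison with the low-end discrete cards mentioned above, and remember the CPU cores share this bandwidth with the IGP.

[code]
# Theoretical peak memory bandwidth = effective transfer rate * bus width.
def bandwidth_gb_s(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits // 8) / 1000.0   # MT/s * bytes per transfer -> GB/s

configs = [
    ("Dual-channel DDR3-1333 (128-bit)", 1333, 128),
    ("Dual-channel DDR3-1600 (128-bit)", 1600, 128),
    ("Dual-channel DDR3-1866 (128-bit)", 1866, 128),
    ("64-bit GDDR5 @ 4 Gbps (assumed)",  4000,  64),
]

for name, rate, bits in configs:
    print(f"{name:34s} ~{bandwidth_gb_s(rate, bits):5.1f} GB/s")
[/code]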
 
Scratch my point above about how slow NV ARM adoption will be in the mainstream PC market; it simply won't happen.

Oh I don't know about that, the mainstream (consumer) PC market is rarely concerned with legacy apps AFAICS. If ARM adoption gains traction in the notebook market, there isn't really a reason it can't be successful in the desktop market as well. Of course, there will always be users who simply must run legacy software...
 