AMD's answer to Larrabee: SSE5 vs. AVX? SPECULATION

The biggest change that I've seen for DX11 is the addition of programmable tessellation, together with the associated pipeline stages (a "control point shader" and whatnot before the vertex shader). I guess this is what Rys is referring to. (Note that there was some relevant info in AMD's slides from I3D 2008... unfortunately the link appears to be dead on their site.) Then again, this article seems to model the DX11 tessellation pipeline slightly differently, so perhaps there have been some changes over time.
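For reference, the tessellation pipeline as D3D11 eventually shipped puts the new programmable stages (hull and domain shaders) after the vertex shader rather than before it, with a fixed-function tessellator in between. A minimal host-side sketch of binding those stages, assuming the shader objects have already been compiled and created elsewhere:

```cpp
#include <d3d11.h>

// Sketch of binding the D3D11 tessellation stages as eventually shipped:
// IA -> VS -> HS -> (fixed-function tessellator) -> DS -> PS.
// Assumes shader objects and input geometry were created elsewhere.
void DrawTessellatedPatches(ID3D11DeviceContext* ctx,
                            ID3D11VertexShader*  vs,
                            ID3D11HullShader*    hs,   // runs per control point / patch
                            ID3D11DomainShader*  ds,   // evaluates each tessellated point
                            ID3D11PixelShader*   ps,
                            UINT                 controlPointCount)
{
    // The input assembler feeds patches of control points instead of triangles.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

    ctx->VSSetShader(vs, nullptr, 0);       // per-control-point transform
    ctx->HSSetShader(hs, nullptr, 0);       // outputs tessellation factors + patch constants
    ctx->DSSetShader(ds, nullptr, 0);       // runs once per point produced by the tessellator
    ctx->GSSetShader(nullptr, nullptr, 0);  // geometry shader left unbound here
    ctx->PSSetShader(ps, nullptr, 0);

    ctx->Draw(controlPointCount, 0);
}
```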

It's an interesting choice to go this route and kind of implies that DX is designed to be "an API for graphics" rather than "an API for controlling the hardware". Perhaps not revolutionary, but it certainly leaves a big gap for stuff like CUDA/PTX and CAL/CTM, in addition to abstractions that sit on top of these like RapidMind/Brook.

Haven't heard much about the OM plans... anyone have links?

Anyways, regarding the linked interview/article, it kind of missed the point on a few levels:
"You're not going to have 8 GPUs, but you are going to have 8 cores"
... uhh... wow.
"Aside from learning how to multi-thread your game engine, that's it - you already know how to program a CPU"
Lol! Apparently parallelism is easy, it's just the GPUs that make it hard with all of their complicated 'shader models' and what not ;) That's probably not what he intended to say, but it came out kind of funny :)

It wasn't a terrible interview, but it didn't really dig into GPU vs. CPU architecture deeply enough to support the conclusions that were drawn. I do agree with his closing comments about lots of interesting stuff becoming possible once we get to break down the current graphics pipeline a bit more, but I think that's kind of vacuously true.
 
Interesting article. Not quite sure what parts of "DX11 tessellation" require hardware and what parts are simply API/driver changes. Hopefully we don't get another virtually useless pipeline stage like GS.

BTW, isn't DX11 going to add the "compute shader" for general purpose computation?

Not sure exactly what that means, but given that we already have stream out from VS, and unified texture fetch, I'm guessing that "compute shader" means you can write to non-fixed locations of memory.
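To pin down the terminology (this is just my reading of "non-fixed locations", nothing confirmed about the spec): a pixel shader can gather, i.e. read from arbitrary input locations but only write to its own fixed output location, whereas a scatter-capable stage computes the write address itself. A trivial CPU analogy:

```cpp
#include <cstddef>

// Gather: each output element has a fixed destination (its own index) but may
// read from arbitrary input locations -- this is what a pixel shader can do.
void gather(float* out, const float* in, const std::size_t* srcIndex, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[srcIndex[i]];           // write address is fixed: out[i]
}

// Scatter: each invocation decides *where* to write, not just what -- the
// capability a compute-style stage with writable buffers would add.
void scatter(float* out, const float* in, const std::size_t* dstIndex, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[dstIndex[i]] = in[i];           // write address is data-dependent
}
```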
 
Interesting article. Not quite sure what parts of "DX11 tessellation" require hardware and what parts are simply API/driver changes.
Presumably this will be left up to the IHVs. Clearly ATI thinks that there should be some hardware in there, but I'd wager that Intel may not agree. NVIDIA... hard to say. TBH though I think the API may be straying a bit far if 2 of the 3 major vendors (assuming Intel pulls something together by DX11's release) implement API functionality in software/drivers.

Hopefully we don't get another virtually useless pipeline stage like GS.
Indeed. Beyond doing a few primitive-level calculations, and theoretically perhaps sending triangles to different render targets (not really necessary, to be fair), I don't have much use for the GS. There are a few operations that it's certainly really helpful for, but it's unclear to me that we couldn't implement those just as efficiently in software (via R2VB, etc.).

Not sure exactly what that means, but given that we already have stream out from VS, and unified texture fetch, I'm guessing that "compute shader" means you can write to non-fixed locations of memory.
That's a possibility, although you can already do that fairly efficiently with the vertex shader using point primitives. May not be the most intuitive from an API-standpoint, but it's one shader, it works, and it's very fast.
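For anyone who hasn't seen the trick being referred to: you draw one point primitive per output element, and the vertex shader computes the point's clip-space position from the desired destination index, so rasterization lands it on exactly the texel you want to write. A sketch of the address math involved, written as plain C++ for illustration (the names are made up; in practice this runs per-vertex on the GPU):

```cpp
// Map a destination texel index to normalized device coordinates so that a
// point primitive rasterizes onto exactly that texel of a W x H render target.
struct Float2 { float x, y; };

Float2 TexelToClipSpace(unsigned destIndex, unsigned width, unsigned height)
{
    unsigned tx = destIndex % width;    // target column
    unsigned ty = destIndex / width;    // target row

    Float2 ndc;
    // Texel centers mapped into [-1, 1]; Y is flipped for D3D conventions.
    ndc.x = ((tx + 0.5f) / width)  * 2.0f - 1.0f;
    ndc.y = 1.0f - ((ty + 0.5f) / height) * 2.0f;
    return ndc;
}
```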
 
These questions came to me because Intel is serious about getting rid of the GPU and dependence on DirectX for 3D graphics. They want 100% x86 CPU rendering/ray tracing to happen ASAP. I'm asking here because I have seen the depth of technical background and knowledge among these forum members and wanted to get a discussion on these topics started.

There's no question that Intel's serious about marketing its cpus and hasn't been serious about marketing gpus since its dismal showing with the AGP-texturing-hobbled i7xx gpus that it foisted to compete with the likes of 3dfx and nVidia several years ago. At the time, neither nVidia nor 3dfx cared much about using the AGP system bus for texturing, simply because using their own discrete onboard local RAM buses for texturing was many, many times faster than the fastest AGP incarnation Intel could bring to market. Intel abandoned the discrete gpu market at that time and has remained aloof from it ever since, preferring to tie its own very weak IGP into the Intel chip systems it sells instead. I think you can divine much about Intel's actual intentions with Larrabee from Intel's historical approach to graphics in general, and I very much doubt that Larrabee will be any different when and if it ever sees the light of day.

Even when discrete 3dfx and nVidia 3d cards were running rings around the i7xx, as demonstrated in benchmark after benchmark at the time, Intel was still professing the "superiority" of the system-based, AGP-texturing-dependent graphics it was selling in the i7xx--right up until Intel suddenly quit the market completely due to the undeniable lack of demand for its competitively underperforming discrete 3d gpus. I recall owning at least two of the discrete i7xx cards and downloading my drivers directly from Intel. The experience, compared with what I was getting from 3dfx at the time, so soured me on the system-dependent 3d that Intel seems compelled to offer that I've never forgotten it.

The minute the RAM requirements exceeded the 2-4 MB local frame buffers featured on the i7xx gpus, performance slowed to a crawl whenever the AGP memory had to be accessed for texturing. In short, the "solution" Intel brought to market to compete with 3dfx and nVidia at the time turned out to be no workable or practical solution at all.

What I've read about Larrabee so far convinces me that Intel hasn't changed a whole lot in the intervening years, and is still trying to use 3d graphics as a vehicle to move its cpus and systems. People really need to understand this aspect of Intel's manufacturing and marketing history if they want to truly understand the motivations behind all the pre-release Larrabee publicity.

What Intel "wants" I believe, with respect to ray tracing, is to capture the minds of as many people as possible with the notion that when you buy Intel's cpus and systems of the near future you simply won't have to worry about gpus since Larabee will offer something better. IIRC, Intel made much the same noises about i7xx--that "AGP texturing" was going to render local bus discrete 3d-card texturing obsolete. As I say, that never happened nor did it ever at any time come close to happening.

It should also be noted that the PCIe graphics bus, like AGP before it, should not be mistaken for a replacement for texturing out of discrete local RAM, which is still the preference of both ATi and nVidia. Yes, the PCIe graphics bus is much faster than AGP ever was, but...the local pools of RAM that both nVidia and ATi texture from on the discrete 3d cards they bring to market today are much, much faster still.
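For rough perspective, some peak theoretical figures from that era (the local-memory example is a generic 256-bit card of the time, not any particular product):

AGP 8x: 32 bits x 533 MT/s ≈ 2.1 GB/s
PCIe 1.x x16: 16 lanes x 250 MB/s ≈ 4 GB/s per direction
256-bit GDDR3 at 2 GT/s: 32 bytes x 2 GT/s ≈ 64 GB/s

Even a first-generation PCIe x16 link sits more than an order of magnitude below what a card can pull from its own memory.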

A last word about ray-tracing--it's as old as the hills and twice as dusty...;) I was ray-tracing commercially for years before 3dfx shipped the first V1 running Glide, and several years before Microsoft jumped into the 3d game with D3d. The "real-time" ray tracing publicity from Intel that we read about far too often these days is not new or unique to Larabee. I first read about it when Intel released the P4 Extreme...heh...;) Intel-funded "studies" were even then putting out demos of the coming "ray-tracing" wave that Intel was implying was right around the corner for 3d games--uh, if only you buy Intel cpus and systems, of course...!

My advice is not to hold your breath waiting on this, but to understand that you are being snookered by a sophisticated marketing campaign the purpose of which is simply to sell Intel cpus and systems--today.
 
There's no question that Intel's serious about marketing its cpus and hasn't been serious about marketing gpus since its dismal showing with the AGP-texturing-hobbled i7xx gpus that it foisted to compete with the likes of 3dfx and nVidia several years ago. At the time, neither nVidia nor 3dfx cared much about using the AGP system bus for texturing, simply because using their own discrete onboard local RAM buses for texturing was many, many times faster than the fastest AGP incarnation Intel could bring to market.
I recall fans of Nvidia's "Riva" cards making a huge deal out of the fact that the Riva did proper AGP texturing while its 3dfx rivals didn't. Intel wasn't the only proponent of it by any means.

To be fair to Intel, there was a sudden massive price-crash in fast memory chips round about that time which caught almost everyone by surprise. 3dfx benefited enormously from that: originally Voodoo 1 was planned to be a $400 part, but with the sudden memory price drop it ended up selling for $300, which made it a commercially viable product all of a sudden. Every other manufacturer had aimed at a much lower performance point in order to make their product affordable in what they thought the market conditions were going to be, so 3dfx were suddenly in a beautiful position with no serious rival.

(This is the reason why John Carmack decided to make a version of Quake that ran natively on Rendition Verite hardware, but not one that ran natively on Voodoo - he was expecting Voodoo to be so expensive it would only ever be a niche product).

If fast memory prices had stayed high in the way that everyone expected them to, then AGP texturing would have been the only economically viable way to go for quite some time.
 
The marketing surrounding Larrabee over the last two months tastes a lot like the IA64 marketing that preceded the release of the earliest Merced chips... Intel should really learn from past mistakes here. As with IA64, which was also supposed to turn the world upside down, they are making claims that in 2010, when their Larrabee chip finally shows up, we're going to drop everything and start doing things in some new way. The statements they're making now will come back to haunt them ultimately. The vendors that took the most damage from Itanium over the last ten years were the Intel partners, like SGI, that bought the story hook, line, and sinker. The Itanium naysayers seem to have been right. Sun and IBM have both outlived Itanium thus far, ten years and counting. In the early years, Intel was forecasting doom for anyone that didn't adopt Itanium for their workstations, referring to the other vendors' chips as "proprietary" (as if Itanium wasn't!). Larrabee could turn out to be fantastic, but something about the way they are going about all of this makes me fear another Itanic fiasco. If Intel is serious about graphics, they could start by fixing the problems with their existing drivers. Oops, maybe they're not THAT serious...
 
The marketing surrounding Larrabee over the last two months tastes a lot like the IA64 marketing that preceded the release of the earliest Merced chips... Intel should really learn from past mistakes here. As with IA64, which was also supposed to turn the world upside down, they are making claims that in 2010, when their Larrabee chip finally shows up, we're going to drop everything and start doing things in some new way.
They've also made statements that Larrabee will be good at the old way as well. I'm reserving judgement on that until we see it. Marketers' idea of "good" can differ quite a bit from the rest of the world.

The Itanium naysayers seem to have been right. Sun and IBM have both outlived Itanium thus far, ten years and counting.
It's hard to say you've outlived something that's still alive.
Itanium has done decently as of late, and a fair amount of Sun's fortunes are in x86 now.
The story enters a new phase in a few years, when Intel's process lead becomes more important: Itanium moves onto the same leading-edge processes as x86, and the system architecture around the chips gets significantly revamped.
The big question is how much IBM really wants to keep its toes in hardware.

The early years, Intel was forecasting doom for anyone that didn't adopt Itanium for their workstations, referring to the other vendors chips as "proprietary", (as if Itanium wasn't!).
The real outcome was that a gigantic swath of that market was lost to cheap x86+graphics card systems before Itanium even made it out.
 
It's hard to say you've outlived something that's still alive.

Yes, I misspoke -- what I was trying to say is that they've greatly exceeded the longevity Intel had forecast for them. There was a great deal of "every processor design other than IA64/EPIC is history" rhetoric back then. The recent statements from Gelsinger that 'the graphics we know today is going away' smack of the same specious optimism about their next plan for world domination. I'll be pleasantly surprised if Larrabee turns out to be as rosy as they're painting it. Intel has picked up lots of good graphics people I know of over the last three years, so I had high hopes those people would fix the situation with Intel's OpenGL implementation and get things back on track there, but so far not much seems to have changed with their integrated graphics products. Maybe all of the best people are working on Larrabee; I hope so for Intel's sake...

The real outcome was that a gigantic swath of that market was lost to cheap x86+graphics card systems before Itanium even made it out.

In this instance I'm not even referring to the graphics side of SGI, they had basically folded their efforts there even before Itanium really got going. When they renamed the company from "Silicon Graphics" to "SGI", that was already pretty much a done deal.
Part of the reason the cheap x86 hardware won was that it didn't hit the thermal wall the way Itanium did. Itanium was great for FLOPS-oriented apps for a time, but in many other respects it lost out to much lower-end Xeon-type hardware, particularly in terms of server density. The space/power/cooling issue really came to a head just when Itanium was supposed to be getting its legs. I think that's one of the reasons it has fizzled for so long.
 