GPGPU and 3D luminaries join 3D graphics heavyweights

A couple of superstars in their own spaces have made newsworthy professional leaps. GPGPU pioneer (and attractive hire for both of the companies we're about to mention) Mike Houston, and long-time (Direct)3D heavyweight Tom Forsyth, have both taken up key positions at two 3D graphics giants.


Read the full news item
 
Whee, "The Graphics Adults" are continuing to mosey on into Intel VCG.

The "street cred" quotient over there is starting to look like critical mass. It's got to help in hiring a full team of devs when you have some names like TomF and Matt Pharr around. . . .the mid-level guys they need to hire a bunch of to actually make their efforts go are more likely to feel like this project is for real, and not just a likely waste (even if well recompensed) of two or three years of their career with those guys on board.

Interestingly, TomF's blog includes a short review of Deano's Raytracing vs Rasterization article, and TomF agrees with him. Conjure with that.

And congrats to Mike & AMD (for landing him), of course!
 
"Luminary", "superstar" eh? I'm not sure I'd go that far, at least in my case, but I hope that I have helped push the field of parallel computation, GPU architecture, and GPGPU. Don't worry, I'll still roam the forums here and I'll try to answer what questions I can and to the best of my ability.

Congrats on the Intel job Tom and good luck.
 
Interestingly, TomF's blog includes a short review of Deano's Raytracing vs Rasterization article, and TomF agrees with him. Conjure with that.

Additionally, Tom was one of the last guys in the PC business still working on a software rasterizer (Pixomatic).
 
"Luminary", "superstar" eh? I'm not sure I'd go that far, at least in my case, but I hope that I have helped push the field of parallel computation, GPU architecture, and GPGPU. Don't worry, I'll still roam the forums here and I'll try to answer what questions I can and to the best of my ability.

Congrats on the Intel job Tom and good luck.

Congratulations to you and Tom.

Can you comment, in any sort of general sense, on what your vision is with AMD?
 
"Luminary", "superstar" eh? I'm not sure I'd go that far, at least in my case, but I hope that I have helped push the field of parallel computation, GPU architecture, and GPGPU. Don't worry, I'll still roam the forums here and I'll try to answer what questions I can and to the best of my ability.

This is why Bill Gates left Harvard early. It's easier to become a big fish in a big pond if you're first a big fish while the pond is small. :smile:
 
Tom has got to be one of the few people in the world who can manage 1000wpm when typing :)
 
:LOL: Apparently TomF has noticed that we noticed.

http://home.comcast.net/~tom_forsyth/blog.wiki.html See "Larrabee Decloak".

I do feel the need to add that while Beyond3D's ninjas are rightly feared far and wide, the reality is that the fact he was joining Intel to work on Larrabee was disclosed publicly by TomF himself, on a page of his own site. We just happened to notice, and being the big mouths we are, of course had to share. :smile:

This part is particularly interesting:

The SuperSecretProject is of course Larrabee, and while it's been amusing seeing people on the intertubes discuss how sucky we'll be at conventional rendering, I'm happy to report that this is not even remotely accurate.

This, of course, is where "street cred" comes in. Most people in the graphics world will take TomF at his word for that kind of thing.

So, I guess we'll see. At any rate, for those of us *not* on the inside, but graphics enthusiasts right down to our little high-tech socks, it is indeed a comfort to see some of the names that have been disclosed in recent months as contributing to the Larrabee project.

What does it actually mean that TomF both seems to agree with Deano's analysis of Raytracing vs Rasterization and assures us that Larrabee won't suck at conventional rendering? Well, it would seem to mean that Larrabee is about a good deal more than raytracing performance. I'm sure we'd all like to hear more about that as we manage to tease and needle details bit by bit out of Intel and friends. :p
 
The fact that Intel is still hiring sw rasterization experts makes me at least wonder what those guys are cooking...
 
On the one hand, my personal philosophy is to not expect much from revolutions in the making when their architects are in the "figuring out what to do" phase.

On the other, my earlier skepticism, that Larrabee would be inferior to dedicated GPUs, rested in part on the assumption that the next generation of AMD/Nvidia high-end chips wouldn't just be a reinterpretation of last year's (or that there'd be no high-end at all, depending on the manufacturer).

I still expect growing pains for Larrabee I, though its successor should do better.
GPGPU still looks set to be marginalized, though consumer graphics seems like a tough nut for Larrabee to crack in a time frame of only a few years.
 
The fact that Intel is still hiring sw rasterization experts makes me at least wonder what those guys are cooking...

What are you thinking, nAo? That they might do DX10 entirely in software other than texture samplers?

On the one hand, my personal philosophy is to not expect much from revolutions in the making when their architects are in the "figuring out what to do" phase.

Well, that's a bit of wisdom, to be sure. My point above is that I'm a little more comforted today about who is "trying to do the figuring out" than, say, when we published the Carmean presentation.


I still expect growing pains for Larrabee I, though its successor should do better.

That's always been a question for long-time observers of Intel... will they have the stick-to-itiveness to hang around and make rev 2 and rev 3 better, and in a timely fashion?

There's a mountain to climb there, no question, and they can't climb it all at once. Everything we know about graphics history of the last 12 years or so tells us that conclusively.
 
The SuperSecretProject is of course Larrabee, and while it's been amusing seeing people on the intertubes discuss how sucky we'll be at conventional rendering, I'm happy to report that this is not even remotely accurate.
(Bold mine)
Now the question is... what are they comparing Larrabee to? G80? G92? Future theoretical performance extrapolated from current parts, taking into account the ETA for Larrabee?
 
That's always been a question for long-time observers of Intel... will they have the stick-to-itiveness to hang around and make rev 2 and rev 3 better, and in a timely fashion?

There's a mountain to climb there, no question, and they can't climb it all at once. Everything we know about graphics history of the last 12 years or so tells us that conclusively.

Larrabee has a few things going for it that I feel should be sufficient to carry it through a revision or two.

CPU designers admit that the utility of symmetric multicore drops to near zero after 4 standard cores, outside of certain lucrative but still limited market segments.
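
(A quick aside, and not the market-segment argument being made above, but one standard way to see why extra symmetric cores stop paying off is Amdahl's law. The sketch below uses a purely illustrative 90% parallel fraction; every figure in it is hypothetical.)

```cpp
// Amdahl's-law illustration of diminishing returns from adding symmetric cores.
// speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction.
// p = 0.9 is purely illustrative, not a measured workload.
#include <cstdio>

int main() {
    const double p = 0.9;  // hypothetical parallel fraction of the workload
    for (int n : {1, 2, 4, 8, 16}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores: %.2fx\n", n, speedup);
    }
    return 0;
}
// Prints roughly 1.00x, 1.82x, 3.08x, 4.71x, 6.40x -- each doubling buys less.
```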

Intel admits there is a need for different methods to further the usability of massively multicore architectures.
In that respect, the work put into Larrabee is work Intel needs to do anyway.

Larrabee's core design may also share the same design philosophy as Intel's Silverthorne core: in-order, 2-wide x86 (the vector unit aside). The core itself is simple enough that design costs are small, with the big difference being in the cache and uncore: areas Intel needs to improve anyway.

I think that means that a fair amount of Larrabee's expense is incremental to things Intel is doing anyway. Better, it's an attempt to get revenue on work that would otherwise not bear fruit for many more years.


The possible upside?
Larrabee's very likely to be a very strong competitor for GPGPU. In addition, it will compete in HPC in a number of areas AMD's Bulldozer was meant to go into (according to early slides showing its prowess in HPC).

With one design effort, Intel furthers massive multicore design, hurts the revenues of GPGPU, makes some money in HPC and possibly graphics, and sucker-punches AMD's next-gen design effort on two fronts (Three, if we discover Silverthorne is distantly related to it and hurts Bobcat. Via gets nailed too).

Even if Larrabee fails in consumer graphics, it should be of interest to HPC. Even if it fails there, the other benefits would be enough that Intel could swallow a few weak iterations and still do well as a whole.

This might be an Itanium-type situation. As poorly as it did early on, the product is now profitable on an operating basis, and it helped kill off several wobbly RISC lines in the high end. POWER and SPARC are essentially the only RISCs left outside of embedded and telecom. SPARC isn't growing, and its new products are in a niche (one that Larrabee could also target...).
IBM is working seriously hard to maintain a leap-frog relationship with a design that is far more intensive on all levels of the system than Intel's.

Wouldn't a successor have to begin development long before its predecessor was released, and would they have been able to identify weaknesses in time to fix them?

The product should be taped out long before the successor's design is frozen.
They'll have some good ideas about where improvements could be made from running engineering silicon.
Larrabee II should also benefit from the software and driver snafus that might pop up for the design that first wades into the real world.
 
What are you thinking, nAo? That they might do DX10 entirely in software other than texture samplers?
IMHO it wouldn't be such a good idea; rasterization per se doesn't map well to CPUs.
Though I wouldn't be surprised if Larrabee has a hw rasterizer but no dedicated setup unit, as setup can be implemented in software (does Larrabee support double precision math? ...)
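
(For illustration of what "rasterization in software" means in practice, here's a minimal half-space / edge-function sketch: all inner-loop arithmetic, no fixed function. It is a scalar toy under our own naming, not anything from Pixomatic or Larrabee.)

```cpp
// Minimal half-space (edge-function) triangle rasterizer -- a scalar toy.
// A real software rasterizer would tile the bounding box and evaluate the
// edge functions across a SIMD vector of pixels per iteration.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c).
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Marks covered pixels (row-major width*height grid) whose centres fall
// inside the triangle. Works for either winding.
void rasterize(Vec2 v0, Vec2 v1, Vec2 v2, int width, int height,
               std::vector<std::uint8_t>& covered) {
    covered.assign(static_cast<std::size_t>(width) * height, 0);
    float area = edge(v0, v1, v2);
    if (area == 0.0f) return;  // degenerate triangle, nothing to draw

    // Setup: clamp the triangle's bounding box to the render target.
    int x0 = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int y0 = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int x1 = std::min(width  - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int y1 = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};  // sample at the pixel centre
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            // Inside if all edge functions share the sign of the triangle area.
            bool inside = (area > 0.0f) ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                        : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside)
                covered[static_cast<std::size_t>(y) * width + x] = 1;
        }
    }
}
```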
 
Larrabee should support x86, and some slides show it as a system processor, which hints at full support.

Slides on Larrabee indicate it should be capable of 8-16 DP operations per clock using SSE. I'm not sure why there's a range; perhaps it hadn't been decided when the slides were created, or there's a different throughput depending on whether the code uses standard SSE or the expanded vector set.
Aside from that, it was stated to support 2 non-SSE DP ops.
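
(For a sense of scale, here's a back-of-the-envelope sketch of what a per-core figure like that adds up to. Only the 8-16 DP ops/clock range comes from the slides; the core count and clock below are hypothetical placeholders, not anything Intel has stated.)

```cpp
// Back-of-the-envelope peak double-precision throughput for a many-core part.
#include <cstdio>

int main() {
    const int    cores        = 32;    // hypothetical core count
    const double clock_ghz    = 2.0;   // hypothetical clock speed
    const int    dp_per_clock = 16;    // upper end of the quoted 8-16 range
    const double peak_gflops  = cores * clock_ghz * dp_per_clock;
    std::printf("Peak DP throughput: %.0f GFLOP/s\n", peak_gflops);  // 1024
    return 0;
}
```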

This extra support does point to the greatest internal threat to Larrabee: the everything-and-the-kitchen-sink syndrome that hit IA-64's Merced.

If McKinley is an indicator, however, Larrabee II should come about after the "we can do anything" phase ends and designers can focus on what it can do well. This is where things go from pie-in-the-sky to interesting.
 