Larrabee and Intel's acquisition of Neoptica

On October 19th, Neoptica was acquired by Intel in relation to the Larrabee project, but the news only broke on several websites in the last 2 days. We take a quick look at what Intel bought, and why, in this analysis piece.

Read the full news item
 
In addition, there is always the possibility that as much as Intel loves the tech, they also love the instant 'street cred' in the graphics world they get from picking up that group of engineers.

Yeah, I must say my first thought was "well, well... 'the graphics adults' have arrived on the scene at Intel. Thank God."

:LOL:
 
Any indications that there were any other acquisitions/hirings Intel made for the sake of the Larrabee project?

The graphics one would have generated more news on the net because it's graphics. Hirings in other fields may not have garnered as much attention.

If the acquisitions for Larrabee have focused on graphics, there shouldn't be many other such additions. Otherwise, this could be some kind of cyclical thing where the group goes on a graphics kick for a month, then a simulations phase, then a data-mining phase, and so on.

If the focus is not yet evident, then the project still hasn't settled on a problem for its solution, though the goal of heading off GPGPU would be served by pretty much any acquisition related to HPC.
 
Hmm.. Intel saying "well, yeah.. that's nice, Microsoft.. but we'd like to pass on your.. DirectX thingy there"

Whatever they're going to do, rafting upstream requires a low-cost product, not something that Intel is fond of..
 
I should have been more specific about what I was curious about.

I should have asked whether or not Intel has been gathering employees for non-graphics applications for Larrabee.
 
Given that only the PS3 and, to a lesser extent, the Xbox 360 have a wide bus between the CPU and the GPU today...
A CPU with 16 PCI Express 2.0 lanes has 16GB/s aggregate bandwidth to a GPU.
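
For reference, a quick back-of-the-envelope check of that figure (my own sketch, not from the post): PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so each lane moves about 500 MB/s per direction, and a x16 link gives 8 GB/s each way, or 16 GB/s aggregate counting both directions.

/* Back-of-the-envelope check of the x16 PCIe 2.0 figure quoted above.
   Assumes 5 GT/s per lane and 8b/10b line coding (80% efficiency). */
#include <stdio.h>

int main(void)
{
    const double gt_per_s_per_lane = 5.0;        /* PCIe 2.0 signalling rate */
    const double coding_efficiency = 8.0 / 10.0; /* 8b/10b encoding          */
    const int    lanes             = 16;

    /* bits/s -> bytes/s: divide by 8 */
    double gb_per_s_one_way = gt_per_s_per_lane * coding_efficiency * lanes / 8.0;
    double gb_per_s_total   = 2.0 * gb_per_s_one_way; /* full duplex, both directions */

    printf("x16 PCIe 2.0: %.1f GB/s per direction, %.1f GB/s aggregate\n",
           gb_per_s_one_way, gb_per_s_total);
    return 0;
}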

Jawed
 
A CPU with 16 PCI Express 2.0 lanes has 16GB/s aggregate bandwidth to a GPU.

Jawed

Even when we see an integrated PCI-E bus on the CPU, I'm curious about the latency effect with serious back and forth traffic between the CPU and GPU, thanks to the physical trip on the bus and the serialization/deserialization delay of PCI-E.

How current setups deal with the PCI-E bus seems rather coarse, compared to the more involved communications being talked about in the story.
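
As a minimal sketch (my own, not anything from the thread) of how one could ballpark that round-trip cost today: the CUDA runtime lets you ping-pong a tiny pinned buffer across the bus, and with a 4-byte payload the measured time is essentially bus plus driver latency rather than bandwidth.

/* Minimal sketch (not from the thread): ballpark the CPU<->GPU round trip
   over PCI-E by ping-ponging a tiny payload.  With a 4-byte transfer the
   time is dominated by bus and driver latency, not copy size. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const int iters = 10000;
    int *h_buf, *d_buf;
    cudaMallocHost((void **)&h_buf, sizeof(int)); /* pinned host memory */
    cudaMalloc((void **)&d_buf, sizeof(int));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (int i = 0; i < iters; ++i) {
        /* one round trip: host -> device -> host */
        cudaMemcpy(d_buf, h_buf, sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(h_buf, d_buf, sizeof(int), cudaMemcpyDeviceToHost);
    }
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("average round trip: %.1f us\n", 1000.0f * ms / iters);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}

Per-trip times from this kind of test typically land on the order of tens of microseconds, which is exactly the sort of overhead that pushes current setups toward coarse-grained, batched use of the bus.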


I've seen mention of it, though not that specific PDF.

I'm more curious about the activity of the organization working on Larrabee, such as its hiring patterns and other factors that might give a hint as to how focused or unfocused the development effort really is.

As a historical example, Merced was a victim of broad design targets that left it hobbled after a forced "die diet", and it took several designs before Itanium started to hit its more modest performance goals.

What Larrabee will achieve upon release will be determined by what has already happened or is happening now.
 
Even when we see an integrated PCI-E bus on the CPU, I'm curious about the latency effect with serious back and forth traffic between the CPU and GPU, thanks to the physical trip on the bus and the serialization/deserialization delay of PCI-E.
Intel's plans seem to have a GPU and PCI Express arriving on the CPU at the same time. So PCI Express won't necessarily be relevant, at least once the "GPU" is Larrabee.

Also, I think Neoptica's concept of a "software rendering pipeline" must implicitly include hiding the latency between CPU and GPU.

It might be better to ask how close Neoptica was to getting a ray tracer running jointly on CPU and GPU before being subsumed... Is that a meaningful path?

Jawed
 
Intel is on a roll when it comes to buying up any possible consumer-level routes for GPGPU, such as Havok or the possible future consoles Neoptica may have targeted.

Maybe Intel is going to junk the GPU part, like it's dumping Havok FX.
 
IMHO Intel is trying to buy any knowledge it can get about implementing a modern shader rasterizer on a multicore x86-like processor. There are not many people out there who have worked in this area lately.
 
It really does seem like Intel feels a real threat from GPGPU.

I guess they think that by delaying the initial commercial uptake of the GPU as a computation platform, Larrabee will look like a more attractive alternative given its x86 roots. At the same time, the technology from these acquisitions would promote HPC in general.

Imagine if the Blu-ray camp had a means of delaying HD-DVD's launch and price-reduction timeline by six months. Not only would HD-DVD lose a big chunk of its cost-difference appeal (particularly for this holiday season), but it might not even be able to convince studios to switch. The long-term outlook would have been bleak.
 
I’ll make a stronger argument here: we have been living with relatively primitive languages because the art of programming language design is still in its infancy. What we are seeing in modern languages are early evolutionary branches that are vehicles for technology development essential to languages that are designed for parallelism (and safety, reusability, fault-tolerance, adaptivity, etc.). What might seem to the C or C++ developer an esoteric field might profoundly affect how she is programming in two years.

Anwar Ghuloum's blog entry from today plants the seeds for a "new and parallel programming language."
Is this what you're looking at, 3dilettante? It sounds like a piece written with confidence and hints of current developments.
 
It does point to a future where Intel can sidestep current APIs, and by extension it is working to cut GPU products off.

GPGPU has lost PeakStream to Google, and GPU physics in games has lost Havok FX (or at least AMD seems to be rolling over because of it).

It doesn't offer much evidence that the Larrabee organization's heart is in consumer graphics, but that seems to be how most everyone is reading the situation.
 
It doesn't offer much evidence that the Larrabee organization's heart is in consumer graphics, but that seems to be how most everyone is reading the situation.


I doubt it though.. getting Larrabee to work under anything we have now would be futile.

I guess the most logical explanation would be a computational device with an advanced software-based rasterizer, used for medical, engineering, or entertainment purposes.

The blogs seem to follow up on each other, so take the one from October 18th.
Hundreds of GigaFLOPs are available in your PC today... in fact, you might even have a TeraFLOP in there. As someone who cut his teeth on a Cray C90 (15 GFLOPS max), this is an intriguing opportunity to dabble; for the latter-day high performance computing programmer (whether you're trying to predict protein structure, price options, or trying to figure out how to thread your game), it is almost too tempting to ignore. However, like a shimmering, unreachable oasis, today's GPUs offer the promise of all the performance you require, but achieving that goal for all but a few applications (notably, those they were designed for: rasterization) is elusive.

the word rasterization links to another blog article of his (October 10th)
However, Daniel’s Quake IV demonstration required no video card interaction from the GPU, and instead only used the video card to send the image to the monitor. This is because Daniel’s demo system had eight x86 cores, a configuration that is destined to become mainstream in a few years. And, because the ray-tracing algorithm scales so well with CPU cores, it doesn’t need the assistance of the GPU in order to get the same performance.
..

Research is going on today that will enable all of these special effects in real time. And in a few years, CPUs may even have the core counts and capabilities to enable effects such as Global Illumination - long sought as the Holy Grail of real time rendering. We think we can make it happen soon, and anyone who is interested should keep close attention on future Intel Developer Forums, where we intend to keep the public aware of our progress

It's like following Dave's hints.. Why the hell would Intel develop a raytracing tool that just runs on their CPUs without making extra money somewhere?
 
I doubt it though.. getting Larrabee to work under anything we have now would be futile.
That's what I said, though it was rather convoluted.

To reword what I stated:

It doesn't offer much evidence that the Larrabee organization's heart is in consumer graphics.
That lack of focus on consumer graphics is how most everyone is reading the situation.


I guess the most logical explanation would be a computational device with an advanced software-based rasterizer, used for medical, engineering, or entertainment purposes.

The blogs seem to follow up on each other, so take the one from October 18th.

the word rasterization links to another blog article of his (October 10th)


It's like following Dave's hints.. Why the hell would Intel develop a raytracing tool that just runs on their CPUs without making extra money somewhere?

Intel's PR with regard to actual graphics work through raytracing is rather poor compared to the rigor put behind other applications of Larrabee.

If we're going by what Intel's blogs say on raytracing, realtime graphics that is up to snuff with dedicated GPUs is still an afterthought.
 
I think Intel hired those guys (and the Havok ones, http://www.fool.com/investing/value/2007/09/17/intel-wreaks-havok.aspx, and the Ct language team) to make Larrabee's API.

Perhaps it's gonna be DX- and OGL-compatible, plus use that API for things not yet implemented in DX (for example, raytracing or physics)... although I would prefer just to use Ct to program my own raytracing or physics.

One thing is clear... Intel is moving its chess pieces before launching the serious attack!
 
Even when we see an integrated PCI-E bus on the CPU, I'm curious about the latency effect with serious back and forth traffic between the CPU and GPU, thanks to the physical trip on the bus and the serialization/deserialization delay of PCI-E.
Compared to the pipeline depth of ye average GPU, how is that relevant?
 