Larrabee delayed to 2011?

I'd like to take this moment to copyright the term VD before someone else takes the initiative... Yep, virtual disease.

This just in... Whaddaya mean VD is already taken?!?
 
It's a pretty long article. I have never seen such long love-letters, even from Charlie. :)

Whatever he's writing had better be true, otherwise his credibility will be worse than Charlie's. Charlie gets his stuff right, at least of late; dunno if it's due to NV's fuckups or better journalism on Charlie's part.

A 2011 launch is definitely not good. This gives NV and AMD vital breathing space in which they can learn and design more efficient GPUs. Intel has clearly blown the small window of opportunity here.

This is the beginning of Q4 2009. 13-18 months from now puts it in the Q4 2010 to Q2 2011 range. I suppose we'll see a 64-core LRB on 32 nm in 2011, then.
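
Quick sanity check of that arithmetic in Python (assuming "beginning of Q4 2009" means 1 October 2009; this is just my own back-of-envelope, not anything from the article):

from datetime import date
from calendar import monthrange

def add_months(d, m):
    # roll the month forward, clamping the day to the target month's length
    y, mo = divmod(d.month - 1 + m, 12)
    return date(d.year + y, mo + 1, min(d.day, monthrange(d.year + y, mo + 1)[1]))

start = date(2009, 10, 1)
print(add_months(start, 13), add_months(start, 18))
# -> 2010-11-01 and 2011-04-01, i.e. Q4 2010 through Q2 2011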

The important tidbit is that the ring-of-rings that was proposed to maintain cache coherency has apparently been nuked. Is it just me, or do others also think that full coherency doesn't scale?
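
To put toy numbers on that intuition (entirely my own back-of-envelope model, nothing Intel has disclosed): with broadcast snooping every miss has to interrogate every other core, so coherence messages grow roughly quadratically with core count, whereas a directory bounds them by the sharer count.

def broadcast_msgs(cores, misses_per_core):
    # every miss is snooped by all other cores
    return cores * misses_per_core * (cores - 1)

def directory_msgs(cores, misses_per_core, avg_sharers=2):
    # every miss talks to the directory plus the current sharers
    return cores * misses_per_core * (1 + avg_sharers)

for n in (8, 16, 32, 64):
    print(n, broadcast_msgs(n, 1000), directory_msgs(n, 1000))

At 64 cores the broadcast column is already about two orders of magnitude worse, which is presumably why anything at Larrabee's scale needs hierarchy (rings of rings, directories) rather than flat snooping.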
 
I am always struck by how these guys are completely unable to do research on their own. For them, research is exclusively asking other people who they presume ought to know something. Actually looking at public information in addition to that is unthinkable; why would you ever do that? It's a serf's job! (This is much more applicable to Fudo and Theo than to Charlie, although even for him I'm not convinced the ratio is optimal.)

The claim that a "PowerVR core at 4 GHz could be powerful enough to start competing with low end and mainstream parts" is especially hilarious.
 
I won't go too much into the article. I don't have any insight into whatever drama may have existed with contractors and tax schemes or an airliner.

That there appear to have been troubles in development, I think most observers would agree.

There are some glaring technical points in the article that conflict with already disclosed data, which, without elaboration on the author's part, suggests a dubious understanding of the topic.
It's like seeing a baseball reporter blank out on what an ERA is.

As an example, I've not seen any indication that Larrabee uses AVX, and every single time the article says Larrabee does, my suspension of disbelief is shattered.

Also, there were some questionable statements about our integrity, but here at Bright Side of News* we are going to continue doing what we did in the past - disclose the information regardless of how good or bad it is. We hope that it is good, but if it's not - don't expect us to stay put.
Maybe I'm old-fashioned, but this really doesn't seem to be a good way to operate.

On another note:
http://www.anandtech.com/video/showdoc.aspx?i=3651&p=7

Larrabee is in rough shape right now. The chip is buggy, the first time we met it it wasn't healthy enough to even run a 3D game. Intel has 6 - 9 months to get it ready for launch. By then, the Radeon HD 5870 will be priced between $299 - $349, and Larrabee will most likely slot in $100 - $150 cheaper. Fermi is going to be aiming for the top of the price brackets.
Anandtech's graphics coverage has appeared a little wobbly in some respects, but I am curious about this point.
There's been a halo of possibly contradictory quotes about Larrabee's target range, and what chip is supposed to come out. We've had Intel execs promise mainstream penetration, then balls-to-the-wall performance, then power-efficiency.

Now Anandtech gives this little tidbit of marketing positioning.
I have some difficulty imagining that huge 32 core chip we saw photos of potentially sliding into the same price bracket as Juniper is right now.
Just how wobbly should we consider Anandtech's grasp of the topic?
 
Well, there have been quite a few changes regarding his stance on IMG; I wonder who had their fingers in that one.
 
That really depends on the microarchitecture of the design.

David

Yeah. But you were in the room when I asked that question. :) Why didn't you bring it up then? :)

Seriously though, I asked about the microarchitecture, and about the power efficiency of implementing the ISA, not of a specific CPU.

-Charlie
 
My point is that the overhead is very different for something like Atom vs. Nehalem vs. Sandy Bridge vs. something else.

David
 
I wonder why their splatting implementation sucks so much; there is nothing inherent in what they are doing to explain why there should be a difference in rendering between raycasting and splatting.
 
Indeed, there is nothing inherent in splatting that prevents it from matching the quality of volumetric ray-casting (VRC) as soon as the sampling density is high enough. It is true that, to match VRC, splatting needs a slightly lower sampling density, but the savings in the number of samples do not compensate for the computational overhead. The statements above just represent the results of my experiments with splatting and VRC; clearly they do not preclude someone from having more luck with splatting. Still, I'm not aware of a splatting implementation that matches the best VRC; if you know of such an example, please provide a reference; and I mean not just an article with claims, but something verifiable that I can run on a computer.
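
For concreteness, the knob we're both talking about is just the step size in the per-ray compositing loop. A minimal VRC sketch (my own toy code; nearest-neighbour sampling instead of trilinear for brevity, and all the names are made up):

import numpy as np

def raycast(volume, origin, direction, step, n_steps, transfer):
    # Front-to-back compositing along one ray through a 3-D scalar field.
    # transfer maps a scalar sample to ((r, g, b), alpha).
    color = np.zeros(3)
    alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        i, j, k = np.floor(pos).astype(int)
        if 0 <= i < volume.shape[0] and 0 <= j < volume.shape[1] and 0 <= k < volume.shape[2]:
            rgb, a = transfer(volume[i, j, k])
            color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)  # "over" operator
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:  # early ray termination
                break
        pos += step * d
    return color, alpha

vol = np.random.rand(64, 64, 64)
tf = lambda s: ((s, s, s), 0.05 * s)  # trivial grey-scale transfer function
print(raycast(vol, (0.0, 32.0, 32.0), (1.0, 0.0, 0.0), 0.5, 256, tf))

Halving step doubles the per-ray sample count, which is exactly the cost splatting is supposed to save; early ray termination is also part of why front-to-back VRC tends to hold up so well in practice.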
 
I'm not saying it would be faster, just that it's strange that it had such huge artifacting.

Although I wouldn't call the subvolume rendering used in that paper (and Voreen) pure raycasting anyway; it's a hybrid image-order/object-order approach.
 