Larrabee at GDC 09

Lemme just copy/paste what I posted on gaf:
---
You guys can get the 1080p wmv from either Project Offset's site (the "GDC'09 Meteor" caption under the thumbnail points to it), which is extremely slow right now, or from Intel's site (click on "Download" below the video), which is fast-ish.

Looks very good (I'm not sure it's native 1080p though), you can see the motion blur... errr.. subtly blurring the scene sometimes, lighting is awesome, sound is phenomenal, the framerate seems to never ever drop (video is encoded at 30fps)... if this is actually running on some sort of LRB-beta, I'm in! :)
---

Now, I'd expect the B3D crowd to be over-analyzing this right now. Perhaps a thread dedicated to the engine/game would be in order, since most of what I just mentioned could quickly become OT in this thread. But we have little info so far, and I'm not sure where to create it (as a game on the PC Games forum? as an engine? I'm not sure), so I'll leave it to the mods' criteria. :)
 
With Microsoft developing the x64 OS'es on AMD hardware, I'm happy they chose the superior implementation. :D
I highly doubt that they don't use a mixed base of development machines to make sure everything works equally well, especially considering the much higher market share of Intel.

Also, AMD's implementation is not superior, just slightly different. Nothing a normal programmer who doesn't write kernel code should care about.
 
I highly doubt that they don't use a mixed base of development machines to make sure everything works equally well, especially considering the much higher market share of Intel.

Also, AMD's implementation is not superior, just slightly different. Nothing a normal programmer who doesn't write kernel code should care about.

I wasn't talking about now, but about when the first x64 XP and Server 2003 came out; there wasn't much "Intel x64" to go by back then.
Also, their switch in 2005 was supposedly AMD-only:
http://news.zdnet.com/2100-3513_22-142581.html
 
I assume that everyone at GDC is too busy taking notes from Michael Abrash and Tom Forsyth's presentations, so they don't have time to post. :)
 
Abrash contends that Larrabee’s performance will be above 1 teraflop, or a trillion floating point operations per second.

[...]

Abrash said that Larrabee isn’t likely to be as fast at raw graphics performance as other graphics chips, but it is power-efficient and flexible.
Reality-check or ...?

I'm struggling to believe it'll be as slow as "above 1TF", implying substantially less than 2TF.

Jawed
 
For 2 TFLOPS you'd need 32 cores at 2 GHz, which should be doable at 32 nm.
Maybe they're trying to hide the real number of cores and their clocks. Or we'll see nice presentations about how their FLOPS are better than Nvidia's/AMD's FLOPS.
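The arithmetic behind that "32 cores at 2 GHz" figure is easy to check. This is a quick sketch assuming each Larrabee core has a 16-wide vector unit retiring one fused multiply-add (2 flops per lane) per cycle, per the published architecture descriptions; the specific core/clock combinations are pure speculation:

```python
# Peak single-precision throughput: cores x SIMD lanes x flops/lane/cycle x clock.
# Assumes one 16-wide FMA (2 flops per lane) per core per cycle; configs are guesses.
def peak_gflops(cores, simd_width, clock_ghz, flops_per_lane=2):
    return cores * simd_width * flops_per_lane * clock_ghz

print(peak_gflops(32, 16, 2.0))  # 2048.0 GFLOPS -> the "32 cores @ 2 GHz" 2 TFLOPS case
print(peak_gflops(24, 16, 1.5))  # 1152.0 GFLOPS -> one way to land just "above 1 TF"
```

As the second line shows, several smaller/slower configurations would also clear the "above 1 teraflop" bar Abrash quoted.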
 
Has anybody taken any pics from the presentation, or any idea if the slides will be posted online? I really wanna know what Abrash and co had to say.
 
1 TF at SP seems pretty low to me. RV770 does 1.2 TF peak (0.96 TF if you don't count the t unit), and the DX11 generation is only going to raise that to far higher values. And since LRB needs to spend its compute power on stuff that dedicated hardware handles on normal GPUs, such as rasterization, primitive assembly, etc., it seems like it's going to be underwhelming.
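The RV770 numbers above come straight out of the public specs: 160 five-wide VLIW units at 750 MHz (HD 4870), with a MAD counting as 2 flops per lane, and the fifth "t" (transcendental) lane optionally excluded:

```python
# RV770 peak SP throughput check (public spec numbers, MAD = 2 flops/cycle/lane).
CLOCK_GHZ = 0.75    # HD 4870 core clock
VLIW_UNITS = 160    # 5-wide VLIW units (160 x 5 = 800 stream processors)
peak_all = VLIW_UNITS * 5 * 2 * CLOCK_GHZ   # all five lanes, incl. the t unit
peak_no_t = VLIW_UNITS * 4 * 2 * CLOCK_GHZ  # excluding the transcendental lane
print(peak_all, peak_no_t)  # 1200.0 960.0 (GFLOPS), i.e. 1.2 TF and 0.96 TF
```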
 
Reality-check or ...?

If you leave flexibility aside in his sentence (which I have no problem believing given the nature of the architecture), its graphics performance is tied to power efficiency. They probably could have set the performance target way higher than the competition, but the resulting power profile would have been a disaster.

I'm struggling to believe it'll be as slow as "above 1TF", implying substantially less than 2TF.

Jawed

What exactly can floating-point numbers tell us about any architecture, without knowing the other critical aspects of all its other parts?
 
They can do more than 1 FMAC/cycle if they build sufficiently wide ALUs. By this I mean that, since ALUs are pipelined, if they build 2 vector ALUs per core they could still dual-issue an FMAC instruction every cycle. Whether they actually do it is an entirely different matter.
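The dual-issue speculation above amounts to a simple multiplier on per-core throughput. A minimal sketch, with the caveat that nothing here is a confirmed design; the 16-wide vector unit and 2 GHz clock are assumptions:

```python
# Hypothetical per-core peak if a core could issue an FMA to two pipelined
# vector ALUs each cycle (speculation only, not a confirmed Larrabee feature).
def core_gflops(vector_alus, simd_width, clock_ghz):
    return vector_alus * simd_width * 2 * clock_ghz  # 2 flops per FMA lane

print(core_gflops(1, 16, 2.0))  # 64.0  -> single vector ALU per core
print(core_gflops(2, 16, 2.0))  # 128.0 -> dual-issuing to two ALUs doubles peak
```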
 
Reality-check or ...?

I'm struggling to believe it'll be as slow as "above 1TF", implying substantially less than 2TF.

Jawed
I find the quote really interesting indeed. Most people didn't expect Larrabee to perform as well as top-of-the-class GPUs, so that part isn't really a surprise. The part on power efficiency is. It would be quite a surprise if Larrabee is more power-efficient than what ATI offers, for instance: the next mid-range ATI part is touted as consuming around 80 watts, so Intel plans to be more power-efficient than that. It's also really interesting in regard to Intel's will to enter the console market; to succeed there they need exactly that kind of characteristic.
This is the most enlightening part of the interview (in regard to hardware) imho.
Low power consumption gives us some hints (and more knowledgeable members such as you ;) will read more into it than me). My take is that Intel is following somewhat the same path/logic as ATI. I may be reading too much into Abrash's quote, but it could be that Intel doesn't want to ship a huge, costly, too-hot, low-yield (that's quite a lot :LOL: ) chip.
I'm not sure that they couldn't deliver 32 cores @ 45nm, but it looks like, as with ATI, they're giving up on big chips for anything but super-profitable server processors. They may want Larrabee to be around the same size as their huge dual-core CPUs. Intel may aim at dual-chip boards for the real high end?
I wonder if Intel could have aimed at lower transistor density to achieve a cooler, higher-clocked Larrabee. Abrash's figures are a good match for a 16-core Larrabee running @ 2 GHz or more.
Low power consumption could also help in the HPC/GPGPU realm.
But I agree that by early 2010 these figures will look bad, so multi-chip GPUs sound like their only choice till 32nm mass availability.
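The 16-core guess above lines up neatly with Abrash's "above 1 teraflop" figure, assuming (as elsewhere in this thread, and purely as a working assumption) 16-wide vector units retiring one FMA per cycle:

```python
# Sanity check: does 16 cores @ 2 GHz match "above 1 teraflop"?
# Assumes 16-wide vector units, one FMA (2 flops/lane) per core per cycle.
cores, width, clock_ghz = 16, 16, 2.0
tflops = cores * width * 2 * clock_ghz / 1000
print(tflops)  # 1.024 -> just over 1 TFLOP, consistent with the quote
```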
 