[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Discussion in 'Architecture and Products' started by Arun, Apr 11, 2007.

  1. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    No graphics? So it all starts to make sense...
     
  2. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
  3. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Says who? ;)
    "It will be easily programmable using many existing software tools, and designed to scale to trillions of floating point operations per second (Teraflops) of performance. The Larrabee architecture will include enhancements to accelerate applications such as scientific computing, recognition, mining, synthesis, visualization, financial analytics and health applications."
    All fits for points Nvidia was trying to make with CUDA - except the word "larrabee" of course. ;)
     
  4. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    It can even scale to petaflops, but that doesn't mean it's going to be good at graphics; you need much more than some zillion FLOPS for that.
     
  5. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Well, if you take a look at AnandTech yesterday, it seems like Intel is suggesting that graphics isn't what they intend it to be good at. And certainly the slides we put up are very GPGPU-focused.

    And yet Carmean is there, and the VCG page specifically mentions "extreme gaming" as one of their target areas. :???:

    The entire picture of Intel's GPU efforts remain a riddle wrapped in a mystery inside an enigma. . . :lol:
     
  6. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    I think Intel isn't entirely sure what would benefit and be a good marketing target, either. Just like us.

    I simply think that they don't have many (or particularly good) plans for what to do with all that power, and are still trying to focus on x86 single-thread emulation. So I expect them to batch those cores together into smaller or wider virtual x86 cores, depending on the workload.

    I am really curious how they are going to handle the required consistency. Cache memory isn't going to help there. Very fast local buffers and interconnects that can be allocated to the core collection and store the states of finished ops might. Hence, those uops.

    Which does take it in the direction of Cell, but with a cache instead of relying on DMA, and with a local storage that is invisible unless you run in native mode. Which you might be able to allocate according to your needs. That would be a good win, if it turns out to be fast enough.
     
  7. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    If the area/performance hit for snooping is significant, I'd rather they just not allow replication at all for writeable pages or cache lines ... sure, it puts a burden on the developer, but meh ... if you don't understand your program's dataflow you shouldn't be expecting good performance anyway.
     
  8. iwod

    Newcomer

    Joined:
    Jun 3, 2004
    Messages:
    179
    Likes Received:
    1
    Why has an April 11th, 2007 article suddenly appeared on the front page?
     
  9. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    It's clearly prefixed by 'Classic' and the description starts with 'just about one year ago'. The point is we haven't had any new article since February, and there's no point constantly pointing at the APX 2500 and ATI Linux Drivers articles - so I changed that to three good articles that might remind some people of what happened in the last year or so. What's so bad about that? :)
     
  10. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    Larrabee is definitely targeted at graphics. I think the confusion is that Intel hasn't figured out how to publicly set expectations.

    I think they want to avoid saying "we're going to beat NV and ATI" since it's clear that if they can get within 20% of the performance of contemporary GPUs it'll be a huge success technically.

    DK
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    What makes you think Larrabee is going to be handled with kid gloves?

    Perceptions of other graphics products with slimmer performance shortfalls have been much less forgiving.

    A 20% shortfall in games where contemporary cards struggle to maintain a consistently playable framerate would be enough to declare Larrabee unusable.

    If it's a 20% shortfall on average, well even R600 didn't do that badly in most games.
     
  12. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    20% is a lot but there's always room to learn from your own mistakes -> Larrabee 2 :)
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    That would be my expectation, but I'd hope for better results on the first outing.

    Larrabee by that time would be a 32nm design, while the best the GPUs could hope for is possibly a 40nm half-node; whether with High-K and metal gates, I'm unsure.

    Even failing that, we can already assume circuit performance will be something TSMC wouldn't be able to match for another couple years afterwards, if all the stars align.
    Power-wise, Larrabee won't be a lightweight, if some slides indicating something north of 150W are to be believed.

    That's an embarrassment of riches I'd rather see produce more.
     
  14. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I read in one of the Larrabee articles that Intel aims for mid-range performance with the first generation, and if things go well, the successor should be able to get into the high end.
    I suppose that makes sense; after all, they ARE the newcomer in this market. It took ATi until the Radeon 9700 to really get on top in the high-end market as well, and ATi was far from a newcomer at that time.
     
  15. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    If that would be a supposed 20% shortfall compared to the fastest GPU at the time, Intel should be cheerleading if their first strike can yield as much as R600's sales. Wasn't that roughly in the 10% region of the high-end segment for 2007? That's a whole damn lot for an IHV entering the GPU market for the first time.

    Personally I wouldn't give a rat's ass about performance; Intel would in the above case adjust the price to that thing's performance anyway. What would worry me most would be driver stability/compatibility, as well as, of course, image quality.

    The common mistake many unfortunately make is to concentrate on the price/performance ratio of a GPU. Actually it should be price/performance/IQ, and since I've been getting more and more sensitive to various optimisations over the last few years: 100 fps with ultra-optimised AF isn't really "20% faster" than 80 fps with true high-quality AF.
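    A quick back-of-the-envelope sketch of that point (the quality weights below are invented purely for illustration; nothing in the thread assigns numbers to AF quality, and real image quality can't honestly be reduced to one scalar):

    ```python
    # Toy illustration: why raw fps comparisons mislead when image quality differs.

    def iq_adjusted_rate(fps, quality_weight):
        """Scale a frame rate by a hypothetical image-quality weight (0..1)."""
        return fps * quality_weight

    # Card A: 100 fps with heavily optimised AF (assumed quality weight 0.75).
    # Card B: 80 fps with true high-quality AF (weight 1.0).
    a = iq_adjusted_rate(100, 0.75)  # 75.0
    b = iq_adjusted_rate(80, 1.0)    # 80.0

    # Nominally A is 25% faster (100/80), but once quality is weighted in,
    # A delivers less "quality-adjusted" throughput than B.
    print(f"nominal speedup: {100 / 80 - 1:.0%}")
    print(f"IQ-adjusted: A={a}, B={b}")
    ```

    The specific weights are arbitrary; the takeaway is only that the ordering can flip once quality is accounted for, which is exactly the price/performance/IQ argument above.
    
    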
     
  16. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    Does anyone really believe Intel?

    It all looks like PR to me... I don't think Intel is capable of doing graphics unless NVIDIA helps them like ATi did with AMD, and that will never happen now. I think RT and Larrabee won't make a dent in graphics for a decade. By then they will have just caught up to where AMD and NVIDIA are now.

    To me it appears to just be smoke and mirrors by the same PR team that brought us "NetBust 10Ghz or Bust"; this looks like a move to discredit NVIDIA and buy time for Intel to really "come up with something".
     
  17. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    Intel does have some advantages relative to AMD and NVIDIA. But the reality is that Intel's process technology is in no way comparable to the benefit of say...10 years experience designing graphics hardware.

    The first time you do ANYTHING, you are bound to screw up and make mistakes. The best you can hope for is that the mistakes are minimized or can be somehow hidden (hurray for a driver!).

    I agree that Intel's silicon is vastly higher performance than TSMC's, but they also have VASTLY more restrictive design rules, which has implications for die area.

    Also, I expect Larrabee to be on 32nm about one year after the first 32nm products ship, so in all reality, NV and AMD will have access to 32nm as well.

    DK
     
  18. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    dkanter: Agreed, that's reasonable. On the other hand, it's worth considering that many of the software guys aren't exactly new to graphics, and that the software portion will be much more important than in normal GPUs. So from my POV, the main risk is hardware engineers overestimating what software engineers can do and software engineers not taking certain hardware efficiency metrics into proper consideration until it's too late. Whether that will actually happen is anyone's guess, however.

    Regarding density and performance - I am seriously expecting the 4Q09/1H10 Larrabee to be on 32nm. I said it and I'll say it again: if it's 45nm, Intel doesn't need to bother even releasing the thing, they'll just look dumb because they'll be at a noticeable process *disadvantage* given what I've seen of TSMC's 40nm process so far (in terms of (perf/mm²)/$). I'd easily describe 40nm as the biggest step in TSMC's process technology since 130nm if they deliver. Of course, given what happened at 130nm... well we'll see! :)

    Anyhow, as I pointed out in my news piece yesterday, TSMC is now releasing 32G in 4Q09 instead of 1H10, while 32LP was moved to 1Q10. Given that 40G was released in 2Q08, that means 32G chips will likely come out ~18 months after that. I'd estimate Q3'10, probably in time for the Winter OEM cycle for part of the line-up.
     
  19. aca

    aca
    Newcomer

    Joined:
    May 4, 2007
    Messages:
    44
    Likes Received:
    0
    Location:
    Delft, The Netherlands
    There was an interview with David Kirk on bit-tech a couple of days back, discussing CPUs and GPUs. Basically, the same points I've seen come up here are mentioned in the interview.
     
  20. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Do not compare Intel with AMD, please.
    Intel is a FAR bigger company, which has developed FAR more technology and has been successful in a lot of areas.
    With all the resources that Intel has, they can surely make this work. Getting back to your Netburst example... no, it was not a very efficient design... but because Intel had a huge advantage over AMD in terms of manufacturing technology, they still made the design work, simply by brute force.

    Larrabee could be a similar scenario... It might not be as efficient as nVidia's GPUs, but if they run at much higher clockspeeds, with more cores, cache and all that, then it might just work.
    Raytracing is of course nonsense at this point, but that's another story.
     