Intel discrete graphics chips confirmed

Discussion in 'Beyond3D News' started by Tim Murray, Jan 22, 2007.

  1. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    My thoughts:

    Pros:
    At least a discrete solution gets around the bandwidth strangulation AMD's Fusion chips will suffer from.
    It runs x86 code.
    Intel seems to indicate that the same tools and software models will be used.

    Cons:
    It runs x86 code.
    Intel seems to indicate that the same tools and software models will be used.

    There's just not enough info, but it doesn't look all that promising for standard rendering, and may be too unfocused to be compelling for any alternative methods.

    I'm surprised at how Charlie seems to be spinning the idea that Intel settled on this as an answer for the GPU because the company in effect didn't know what else it was an answer for.
    Usually, being uncertain about what it is you are doing isn't a good thing.

    The fact each core is x86 isn't really that big of a bonus. There's just so much baggage attached to that.
    At first I thought maybe they could streamline the cores by dumping all the extraneous interrupt handling and all those extra instructions, but can a core support x86 properly without all that extra stuff?

    Then there is the fact that there are 16 such cores. That's 16 decoders and front ends. They're multithreaded, but are they threaded in the same fashion as a standard x86 CPU? That's a bit heftier per-thread than a G80's threading.

    Being separate cores, they are relying on a wide ring bus. It's not unheard of to have fast vector processors on a ring bus, but there's no info yet on what is done to keep 16 cores happy. It's such a general solution that it may very well not be good enough in any particular discipline.
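
    As a rough illustration of why a 16-stop ring is a real design question (the stop count here is just the rumoured core count; actual ring width, clocking and topology are unknown), here is how the average hop distance grows on a bidirectional ring:

    ```python
    # Average hop distance on a bidirectional ring, assuming traffic always
    # takes the shorter direction. The 16-stop case is the rumoured core
    # count; everything else about the interconnect is unknown.

    def average_hops(num_stops):
        """Mean number of hops between two distinct stops on the ring."""
        total = 0
        for src in range(num_stops):
            for dst in range(num_stops):
                if src == dst:
                    continue
                clockwise = (dst - src) % num_stops
                total += min(clockwise, num_stops - clockwise)
        return total / (num_stops * (num_stops - 1))

    if __name__ == "__main__":
        for n in (4, 8, 16):
            print(f"{n:2d} stops -> {average_hops(n):.2f} average hops")
        # 16 stops -> ~4.27 average hops; every extra core adds latency and
        # contention that the ring somehow has to hide.
    ```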

    What exactly is this vector engine strapped to each core?
    How much better would this be than the ALU array(s) on a GPU?
    Is it gifted with a full GPU ISA? Can it do all those nifty filtering and interpolating functions x86 doesn't do?

    If so, why bother with the x86 stuff?

    After all the work being done by Nvidia and ATI to decouple texturing units from the ALUs, Intel is forcing the operations together again. The division seemed to work out fine before; why does Intel think it doesn't need to worry about this?

    I'm getting the feeling this is going to be a solution that has broad appeal as an idea, but that everyone in the set it appeals to is going to say "it would have been awesome, if only they had done *insert some feature omitted for generality here*, and if only it didn't have all of that *insert crap each task doesn't need but gets slowed down by anyway*".
     
    AlBran likes this.
  2. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Speaking directly to 'short-pipeline,' I took it to mean in the +/- ~10 stage range.

    Well, I guess this steps into whether this particular solution is being aimed at the graphics market and simply rides on the coattails of x86 development, or whether it's a general solution targeted at situations outside of gaming as well. I know that may seem a little strange to differentiate between, since whatever the case, this thing's going to be 'general' to a fair extent... but I simply got the impression that this was a solution with a particular focus in mind; this in spite of the seeming similarities you point out to architectures like Niagara, which I agree it does resemble in some ways.

    Well, that's just it though; it depends on what we mean by 'graphics.' The information we're dealing with seems to imply a move from the world of rasterization as we know it towards something... else. In that context, it's hard to either claim it supreme or to write it off completely. Whatever the case, and whatever happens with Intel, I think we can all agree that we're living in a very exciting time for architecture divergence - and convergence - both.

    I want to add that this is all discussion in the hypothetical as well. The fact is that since it comes from the Inquirer... and Charlie D to boot... there's an 80% 'BS Factor' we have to assume as a given.

    Personally until this possibility was raised, I had always assumed that it would be sort of a discrete 'macro' effort revolving around their traditional pushes into PowerVR-esque solutions in the integrated market.
     
  3. Techno+

    Regular

    Joined:
    Sep 22, 2006
    Messages:
    284
    Likes Received:
    4
    I think the x86 compatibility is there to make this product a replacement for both the CPU and the GPU, maybe? But 16 cores for gfx is still weird, unless each mini-core is made up of a mini-CPU plus a mini-GPU.
     
  4. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    If I had to take a guess it's two major reasons:

    1.) x86 everywhere has become a motto for Intel, much like Nvidia wants to drive every pixel. Not to mention this setup looks like it'll make an awesome competitor in any market that Niagara- or Cell-like devices attempt to enter.

    2.) x86 has the smallest number of GPRs of any current CPU ISA that I'm aware of, and this likely makes it one of the best suited for implementing a large number of hardware threads. And a large number of threads makes it more suitable for use as a GPU.

    So you have the public x86 ISA, which is incredibly widespread, and it also happens to be fairly well suited to multi-threaded hardware (compared to other CPU ISAs).
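
    A minimal sketch of the arithmetic behind that argument, assuming a hypothetical 4 threads per core and counting only architectural integer registers:

    ```python
    # Rough arithmetic behind the "few GPRs -> cheap hardware threads" idea.
    # Register counts/widths are architectural; the thread count per core is
    # a made-up figure, and FP/SSE and rename state are ignored entirely.

    isas = {
        "IA-32 (8 x 32-bit GPRs)":        8 * 4,
        "x86-64 (16 x 64-bit GPRs)":     16 * 8,
        "64-bit RISC (32 x 64-bit GPRs)": 32 * 8,
    }

    THREADS_PER_CORE = 4   # hypothetical

    for name, bytes_per_thread in isas.items():
        total = bytes_per_thread * THREADS_PER_CORE
        print(f"{name:32s}: {bytes_per_thread:3d} B/thread, "
              f"{total:4d} B of GPR state for {THREADS_PER_CORE} threads")

    # The per-thread integer state really is smaller for x86, though vector
    # registers and rename state dominate in practice, so treat this as a
    # sketch of the argument rather than a real area estimate.
    ```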
     
  5. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    I've heard many of the same things that the Inquirer and VRZone have reported. The only things that are new to me are that it is in fact in-order, and the core interconnect info. Meanwhile, the conjecture that it isn't meant to be just a GPU more or less originated here at B3D.
     
    #85 Killer-Kris, Feb 9, 2007
    Last edited by a moderator: Feb 10, 2007
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    The CPU as GPU has been done before by Intel. The i860 didn't make a splash as a CPU, but it did find some use as a graphics accelerator.

    However, the chip can't be both a discrete solution and a CPU replacement. Not being discrete would cut bandwidth, and being discrete and mounted on a board would keep it from being a socketed CPU.

    They could try two product lines, I suppose, but there would be far superior x86 processors by that point.

    They could go heterogeneous and have an array of 8 x86 vector cores and 2 x86 primary cores, but there would have to be a better bandwidth solution than is currently present for socketed processors.
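
    A back-of-envelope comparison of the bandwidth gap in question, using approximate 2007-era figures (a 1066 MT/s front-side bus versus an 8800 GTX-class 384-bit GDDR3 board); the numbers are illustrative, not tied to any actual Larrabee configuration:

    ```python
    # Order-of-magnitude comparison behind "not being discrete would cut
    # bandwidth". Figures are approximate 2007-era numbers, not a spec for
    # any particular Larrabee configuration.

    def bandwidth_gb_s(transfers_per_s, bus_bytes):
        """Peak bandwidth in GB/s for a given transfer rate and bus width."""
        return transfers_per_s * bus_bytes / 1e9

    socketed_fsb = bandwidth_gb_s(1066e6, 8)      # 1066 MT/s, 64-bit front-side bus
    discrete_gddr3 = bandwidth_gb_s(1800e6, 48)   # ~900 MHz GDDR3, 384-bit board

    print(f"Socketed CPU (FSB):     ~{socketed_fsb:.1f} GB/s")     # ~8.5
    print(f"Discrete board (GDDR3): ~{discrete_gddr3:.1f} GB/s")   # ~86.4
    # Roughly an order of magnitude apart, which is why a socketed part
    # can't see the memory bandwidth a discrete board provides.
    ```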

    Intel looks like it wants to bring raytracing or a non-rasterizing method to the fore, but I don't think they can force that paradigm so easily.

    How about some opinions on that?
    I don't think 16x G80's math capability being devoted to ray-tracing can match the polish of a future rasterizing GPU, which is likely to be just as powerful or more so in terms of units.
     
  7. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    I don't think any developer would want to give up OpenGL or Direct3D and all the things that those APIs provide. So if Microsoft supports what the developers want, I don't see any way for Intel to drag them away from that, kicking-and-screaming or any other way.

    I have a hard time accepting 16x G80's math capability; Moore's law just isn't going to allow for that. Assuming 32nm, and playing fast and loose with the shrinking, we're looking at just 4x as much logic to play with, and I really doubt another 4x on the frequency to get up to the 16x.
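
    A quick sanity check of that scaling argument, assuming ideal quadratic density scaling from G80's 90nm process to the 32nm node mentioned above (real designs recover less than this):

    ```python
    # Quick sanity check on the scaling argument above. The node names are
    # public; the "ideal" quadratic density scaling is a textbook idealization
    # that real designs fall well short of, which is the point being made.

    G80_NODE_NM = 90       # process G80 shipped on
    TARGET_NODE_NM = 32    # node assumed in the post

    ideal_density_gain = (G80_NODE_NM / TARGET_NODE_NM) ** 2
    print(f"Ideal area scaling 90nm -> 32nm: {ideal_density_gain:.1f}x")  # ~7.9x

    # Even taking the ideal figure at face value, a 16x jump in math
    # throughput would still need roughly another 2x from clocks or
    # architecture -- and real-world density gains are lower still.
    remaining = 16 / ideal_density_gain
    print(f"Factor still needed to reach 16x: {remaining:.1f}x")
    ```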

    As for the ray-tracing, I have no clue. :)
     
  8. Techno+

    Regular

    Joined:
    Sep 22, 2006
    Messages:
    284
    Likes Received:
    4
    AMD's and Intel's approaches to a CGPU are different: AMD wants to put it on die and then on core, while Intel wants to go straight for full integration. 3dilettante, geo, Uttar or anybody else, can anybody tell us which one is more appropriate, and why?
     
  9. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    That depends how much of the die is dedicated to the ALUs on both chips, though. TBH, I think people are quite overestimating that for G80, but that's hard to be sure of right now.
     
  10. DavidC

    Regular

    Joined:
    Sep 26, 2006
    Messages:
    347
    Likes Received:
    24
    Intel will probably not put their GPU on the latest process. Their CPUs get it, but everything that's not the latest desktop/laptop CPU is on the older process technology. The Inquirer article talked about a 65nm sample being targeted for late 2007 before the delay, which is the same process their chipsets are on.

    Half or a quarter of the core for the same performance?? Well, the same was claimed for IA64, and the truth has turned out to be more like 2x the core for the same performance.

    Then there are the drivers. They can't get the GMA X3000 drivers right; can they get them right on an entirely new design?? In my opinion it would be more productive to extend the X3000, so the drivers can mature, rather than create an entirely new design.
     
  11. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,153
    Likes Received:
    483
    Location:
    en.gb.uk
    Interesting thread over at RWT on this subject

    linkage
     
  12. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    I definitely don't understand large corporations' reasoning sometimes. If a GPU was what they were really after, we all know what they have a license to. I get the impression that the license fee, plus whatever other time and money would be required, would still cost substantially less than Larrabee and/or the X3000, and on top of it they'd have respectable integrated and discrete graphics chips with drivers that actually work. Instead you see a not-invented-here attitude and products that don't exactly shine.
     
  13. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    Intel has been the king of "NIH" for as long as anyone can remember.
     
  14. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    While some of the points are interesting, I'll have to disagree completely and utterly with the overall picture portrayed. A very wise man once said that there are two fundamental kinds of architectures, and that no core can simultaneously be good at both. The first is massively parallel and latency-tolerant, while the second is focused on serial workloads and is less latency-tolerant; thus, it must reduce latency through caching. Intel's approach in Larrabee looks to be very much a hybrid. Until proven otherwise, it fits very cleanly in the "Jack of All Trades, Master of None" category. It would have plenty of disadvantages, but also some advantages, including slightly more potential for incoherent workloads and some more inherent flexibility.

    Extending the massively parallel paradigm towards higher performance in incoherent scenarios will likely be one of the key research subjects of the field in the coming years, I suspect. Please note that I am hereby only referring to the notion of computational coherence, not memory fetch coherence. The latter is a fundamental aspect of modern computing, and may only be effectively minimized through caching or algorithmic innovation. Anyway, it remains to be seen if there is even any "good" answer to the coherency problem; I suspect there is, but I'm not ambitious enough to predict a clear answer to it! :)
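
    A toy sketch of the computational-coherence cost being described, assuming a hypothetical lockstep batch of 16 elements and made-up per-path instruction counts:

    ```python
    # Toy model of a lockstep (SIMD-style) batch hitting a divergent branch:
    # if any element in the batch needs a path, the whole batch pays for it.
    # Batch size and per-path costs are made-up numbers for illustration.

    BATCH_SIZE = 16
    COST_PATH_A = 10   # instructions for the cheap path
    COST_PATH_B = 30   # instructions for the expensive path

    def batch_cost(takes_path_b):
        """Instruction count for one batch, given which elements take path B."""
        cost = 0
        if any(not t for t in takes_path_b):
            cost += COST_PATH_A      # path A executes if anyone needs it
        if any(takes_path_b):
            cost += COST_PATH_B      # path B likewise
        return cost

    coherent = [False] * BATCH_SIZE                        # everyone takes path A
    incoherent = [i % 2 == 0 for i in range(BATCH_SIZE)]   # half take each path

    print("coherent batch:  ", batch_cost(coherent), "instructions")    # 10
    print("incoherent batch:", batch_cost(incoherent), "instructions")  # 40
    ```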

    As for Larrabee, it will be interesting to follow no matter what. If it is fundamentally what Charlie is describing, even with extra special-purpose units for things like texturing, then I personally believe it will only be applicable in a niche. I can see where Intel is coming from though; they need to find a good way to go forward, and right now, that's probably what they believe is their best shot. If it is as described, I personally believe it will mostly turn out to be a strategic mistake. But it might not be quite like what Charlie is describing, or even if it is, I might be wrong. We'll see.
     
  15. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    You would expect, on the contrary, that Intel is more willing to improve their relationship with Imagination Technologies today than ever before. Consider that it looks like they are making a risky bet with Larrabee - and multi-billion dollar corporations don't tend to do such things without a proper backup plan ;)
     
  16. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    So the VR-Zone article no longer makes any mention of being in-order, and I could have sworn that they mentioned something called run-ahead execution. My first thought was that it is similar to Hardware Scout; might this be a good way of masking texture fetch latency in a CPU-like architecture without tons of threads?

    And in a similar vein to Arun's mention of incoherent workloads, what impact would there be on a graphics workload for a processor whose branch predictor always mispredicts on the last iteration of a loop? I know a short pipeline will minimize the impact, but that's still 10 or more instructions wasted, perhaps per pixel.
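
    For a sense of scale, a rough worked example under the assumptions above (a ~10-instruction flush, one mispredict per pixel, and a hypothetical 1920x1200 frame at 60 fps):

    ```python
    # Rough cost of always mispredicting a loop's final iteration.
    # Every figure here is an assumption chosen only to show the scale.

    PIPELINE_FLUSH = 10          # instructions lost per mispredict (short pipe)
    MISPREDICTS_PER_PIXEL = 1    # e.g. the last iteration of a per-pixel loop
    PIXELS = 1920 * 1200         # one frame at a then-high-end resolution
    FPS = 60

    wasted_per_frame = PIPELINE_FLUSH * MISPREDICTS_PER_PIXEL * PIXELS
    wasted_per_second = wasted_per_frame * FPS
    print(f"~{wasted_per_frame / 1e6:.0f}M wasted issue slots per frame")
    print(f"~{wasted_per_second / 1e9:.1f}B wasted issue slots per second")
    # ~23M per frame, ~1.4B per second: small next to total throughput,
    # but not free, which is one reason GPUs sidestep prediction entirely.
    ```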
     
  17. Killer-Kris

    Regular

    Joined:
    May 20, 2003
    Messages:
    540
    Likes Received:
    4
    Then again, Intel has had a recent problem with not having backups ready within any reasonable length of time when they were certain their large investment was the correct way to go; take Itanium and the P4, for example.
     
  18. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Does it matter though in the GPU space? I see Intel's push in GPUs as similar to their prior (and later aborted) push into LCoS for the CE/TV space: all upside, with downside only in R&D, of which Intel has seemingly unlimited funds to devote anyway. The P4 and Itanium are mistakes on a more fundamental level, costing billions directly (Itanium) and indirectly (P4 marketshare loss), and having been executed as missteps in Intel's core business.

    Frankly, I'm no fan of Intel, but I am a fan of exotic architectures... don't know why, I just am. :)

    So, I'm interested to see where this goes. As I mentioned before, from the descriptions given, it truly does not seem to target the rasterization paradigm, at least in its present form. I mean, if this is to be believed, it's fairly substantial:

    Anyway, either way NVidia is in a position where they are slowly trying to take an architecture that, although already programmable on some level, is ill-suited to GP tasks, and move it closer to GP. Intel will be coming from the exact opposite direction, taking a GP architecture, retaining the core instruction set, and making dramatic hardware changes to achieve gains in specialized tasks.

    Maybe neither company will be able to displace the other from its place, but the efforts to do so should be interesting to watch. It could all ultimately result in nothing, but it makes for a better spectator event to know that these companies must feel an extreme sense of urgency these days at even the thought that there could be some attempts on cloistered markets made by otherwise non-traditional competitors.
     
  19. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,325
    Likes Received:
    93
    Location:
    San Francisco
    As I wrote in another thread... without a hw rasterizer (and TMUs as well) they're headed for failure.
    A few years from now it would probably be easier for Nvidia to slap some x86 CPU/decode logic onto their GPUs than for Intel to add competitive GPU features to their CPUs.
     
  20. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    213
    Location:
    Uffda-land
    I don't think they can afford to fail this time. They can't afford the typical Intel "Meh, we tried for two years and it's not working, screw it" mentality.
     