[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Money isn't everything though. Corporate ethos is fairly important too. All the smart GPU guys in the world can't make a product a success if the rest of the organisation won't or can't let them.

Of course, but aren't we going a bit too far with this? I somehow doubt the usual brainstorming session at Intel goes something like this (simplified and inaccurate):
"Must design new GPU shit that renders Crysis at X FPS -> think think think -> Sir, we've got it! -> shows a GPU design -> Management: Shoot the imbeciles, no x86 in there... must have something CPU related, our blood oath demands it, and David Kirk said so as well!"

GPUs evolved into what they are today because nV's and ATi's and whomever's smart guys were faced with the problem of doing 3D rendering within certain parameters. Why do we automatically assume that Intel's smart guys, when faced with the same issue, won't come up with something quite similar for the solution?
 
Why do we automatically assume that Intel's smart guys, when faced with the same issue, won't come up with something quite similar for the solution?

It's not the smart guys I'm questioning; Intel has some of the smartest engineers on the planet, of that there's no doubt. Their marketing and their management, on the other hand, are less obvious. From what I've heard, their corporate ethos is very process-oriented and doesn't react quickly to rapid changes in external demands.
 
i recognize them from the P4 NetBust days! .. and i suffered with intel all the way through with my "EE" until my current e4300, which was a "side" project for them.

That's not quite correct. They had planned the transition to a Pentium M-based or -inspired CPU for a while before Prescott crashed and burned. They just sped the transition up.
 
It's not the smart guys I'm questioning; Intel has some of the smartest engineers on the planet, of that there's no doubt. Their marketing and their management, on the other hand, are less obvious. From what I've heard, their corporate ethos is very process-oriented and doesn't react quickly to rapid changes in external demands.

Their latest efforts towards trimming the fat off their management structures suggest that they're implementing change on that front. I'm leaning towards keeping an open mind when it comes to Intel (or any serious player, for that matter) as opposed to simply discarding them just because. Theirs is a success story after all, and a success that has been maintained over quite a stretch. Which does not mean they are impervious to flopping.
 
Where can we see this in action? That's what I want to know. NVidia could make a lot of friends getting H.264 encoding out there, unlike the useless crap that ATI tried a few years back.

Jawed

Not aimed at the consumer market really, but Elemental Technologies has an accelerated H.264 encoder available that makes use of Nvidia hardware.
 
Not aimed at the consumer market really
Uhm... It very much is aimed at the consumer market. They are currently not selling it there, but will start doing so in a few months, and I heard from a few people that NVIDIA is demonstrating it as a selling point to OEMs already. However, if they stick with their claimed pricing strategy (i.e. comparable to competitors), and NVIDIA lets them do such a stupid thing, then obviously I'm not very confident in their ability to make a real impact.
 
Not aimed at the consumer market really, but Elemental Technologies has an accelerated H.264 encoder available that makes use of Nvidia hardware.
Looks like it'll be fun to compare against x264 running on a four-core CPU... They really need to be doing more than baseline encoding though; saying you've got H.264 encoding but no CABAC is a bit of a con if you ask me...

Jawed
 
Looks like it'll be fun to compare against x264 running on a four-core CPU... They really need to be doing more than baseline encoding though; saying you've got H.264 encoding but no CABAC is a bit of a con if you ask me...
They've got main profile running according to a recent interview on a french website. They won't have high profile at launch though, apparently - but given that their main audience will be those transcoding for lower-end devices including handhelds, I don't feel the lack of high profile encoding is a huge problem (although obviously it'd be a nice addition and not quite as hard as high profile).
 
Uhm... It very much is aimed at the consumer market. They are currently not selling it there, but will start doing so in a few months, and I heard from a few people that NVIDIA is demonstrating it as a selling point to OEMs already. However, if they stick with their claimed pricing strategy (i.e. comparable to competitors), and NVIDIA lets them do such a stupid thing, then obviously I'm not very confident in their ability to make a real impact.


Any performance disparity between double precision and single precision?
 
Actually that's not completely true. It's pretty damn clear from Jen-Hsun's comments at Analyst Day that he wants CUDA to be a major factor in the consumer space - and my info indicates that they have already got design wins thanks to that focus.

The point remains, though, that I am massively unconvinced that they know how to win this battle. They've started focusing on this just now, presumably in great part because Jen-Hsun went 'omfg' at the H.264 encoding presentation he received from those guys. They could have started 18 months ago - but they didn't get the importance of that back then, and focused exclusively on HPC. That lost them valuable time.

So what's the right strategy? In-house R&D. And just as opening up CUDA (making parts open-source etc.) is the right strategy for HPC, it's NOT the right strategy for the consumer space, yet they'll likely do the former in a way that'll also result in the latter. So yeah - I really don't think they have a good grasp of that space right now, which means that until I see any evidence that they do, or get to actually shout at them in person about how they don't have a clue, I won't be very confident in their prospects on that front. The key thing to understand is that it's *incredibly* time-sensitive, and if not done fast enough it risks being rather worthless.

And hello Wesker! :) Sorry for having to contradict you in your first post, but if that makes you feel better apoppin is indeed a bit crazy here (no offense intended!) ;) I mean, 'privileged with a vision'? errr... and metaphoric speech like that just doesn't work over multi-paragraph texts, especially not on the internet.

As for what apps are possible via CUDA - I think if you look at any CPU review testing multi-core applications, you'll find plenty of ideas. No, not every single one of those could be done on the GPU - but many, many could be, or at least be substantially accelerated instead of completely offloaded (which also adds a lot of value). If you want me to be a bit more precise here, just ask.

No offence taken. ;) In fact, I'm actually glad that you took notice of my post.

May I ask how NVIDIA should go about shifting GPUs (particularly to OEMs and desktop consumers) for the sake of running CUDA?

I just can't picture the Dells and HPs spending the extra money to integrate an NVIDIA GPU into their systems just to increase performance of a range of select applications (which the average Joe user probably won't even use in the first place).

CUDA and NVIDIA's GPU revolution sounds great, in theory. But the idea runs out of steam very quickly, IMO.

Besides, isn't this one of the reasons why Intel is designing Larrabee? Should GPUs truly spark a revolution in the processing landscape, then Larrabee will be Intel's insurance. Of further note is that AMD has ATI for backup as well.
 
May I ask how NVIDIA should go about shifting GPUs (particularly to OEMs and desktop consumers) for the sake of running CUDA?

I just can't picture the Dells and HPs spending the extra money to integrate an NVIDIA GPU into their systems just to increase performance of a range of select applications (which the average Joe user probably won't even use in the first place).
But that's the key point: quad-cores also only increase performance on a range of select applications in practice! So the key isn't to accelerate a billion applications; the key is to make it more valuable for Joe Consumer to have a powerful GPU than a quad-core or even a tri-core. For the 2009 Back-to-School cycle, Intel's line-up will look like this:
Ultra-High-End: 192-bit DDR3 Quad-Core Nehalem
High-End: 128-bit DDR3 Quad-Core Nehalem
Mid-Range 1: 128-bit DDR3 Dual-Core Nehalem
Mid-Range 2: 128-bit DDR2/3 Dual-Core Nehalem [Third-Party Chipset]
Low-End: 128-bit DDR2 Dual-Core 3MiB Penryn
Ultra-Low-End: 128-bit DDR2 Single-Core Conroe [65nm]

So let's make the goal clear: encourage OEMs and customers to stick to dual-cores, and potentially even Penryn, in favour of spending on a more expensive GPU. As you point out, this won't work if you only accelerate select applications; so the solution is simple: do massive in-house R&D for a wide variety of suitable applications, and release the results as freeware and free closed-source libraries for third-party applications to use.

I have a list of suitable applications if you are interested. However, it should be easy to make up your own by looking at any modern CPU review and pondering whether the multi-core applications benchmarked could be accelerated via CUDA. The answer is 'yes' in a surprisingly high number of cases; you're unlikely to parallelize LZMA/7Zip, but there are a lot of things that you *can* do in CUDA, especially with shared memory.
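To make the shared-memory point a bit more concrete, here's a minimal sketch of the kind of data-parallel kernel these workloads boil down to: a block-wise sum reduction staged through shared memory. This is my own illustration, not anything NVIDIA ships; all names are made up.

Code:
// Minimal sketch: per-block sum reduction using shared memory.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float *in, float *out, int n)
{
    extern __shared__ float cache[];          // per-block shared memory
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    cache[tid] = (i < n) ? in[i] : 0.0f;      // one element per thread
    __syncthreads();

    // Tree reduction within the block
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            cache[tid] += cache[tid + s];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = cache[0];           // one partial sum per block
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    blockSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
    cudaThreadSynchronize();                  // CUDA 2.x-era sync call
    printf("launched %d blocks\n", blocks);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}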

CUDA and NVIDIA's GPU revolution sounds great, in theory. But the idea runs out of steam very quickly, IMO.
It runs out of steam instantly if you don't have enough applications on it. But if you've got a bunch of applications running on it, you can simultaneously reduce the value-add of quad-core and increase the value-add of GPUs. You won't get many design wins in the enterprise space for it; but honestly, if the decision makers are rational (errr, that's a bit optimistic) that market should become a pure commodity market and ASPs should go down 10x. So yeah, you can't add value there, but that's really not the point.

Besides, isn't this one of the reasons why Intel is designing Larrabee? Should GPU's truly spark a revolution in the processing landscape, then Larrabee will be Intel's insurance. Of further note is that AMD has ATI for back up as well.
Yes, but once again, basic arithmetic tells us that Intel will be in big trouble if GPU/GPGPU ASPs grow while CPU ASPs fall, because the chip represents nearly 100% of the BoM for a CPU but much less than half of it for a GPU board. So if end-consumer spending stays constant, replacing every CPU dollar with a GPU dollar would result in 50-80% less silicon revenue; i.e. utter financial disaster.
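To put rough numbers on that (entirely made-up figures, just to illustrate the mechanism):

Code:
CPU:  ~$150 ASP, chip is ~100% of the BoM       -> ~$150 of silicon revenue per box
GPU:  ~$150 board ASP, chip is ~20-50% of BoM   -> ~$30-75 of silicon revenue per box
Same consumer spend, roughly 50-80% less revenue for the chip maker.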

Larrabee is a great product for Intel if it just eats away at GPUs. But sadly for Intel, the world is more complex than that, and their fate will be decided in the 2009 Back-to-School and Winter OEM cycles, likely before Larrabee is available. This is another reason why in-house development is key, by the way; in that timeframe, third-party software developers might want to hedge their bets with Larrabee, and clearly that would have negative consequences. And given the fact that the potential profit opportunities here dwarf those of traditional GPU applications, I'd argue it'd be pretty damn dumb to let third parties decide your fate. Such a strategy has very rarely worked in the past, and it's not magically going to work now. Can they still help? Well yeah, duh. But once again that's not the point.
 
Strangely .. . .. *all* of you guys are actually right

CUDA is what nVidia is betting on to take on intel as they make their foray into nVidia's own territory with Larrabee
--i have been watching this brewing for a long time .. i even have a Silly Pet Name for the Upcoming confrontation of intel vs nVidia
The War of CUDA vs. OctoBeast!
- and i am making popcorn .. lots of it


Yes, larrabee is the future .. how soon? - we'll see
- might be many years. Perhaps intel would have got "lucky" with P4 also
[my opinion: larrabee will take 5 years and then intel will dump it or really modify it]

CUDA is doing unique things - just read thru nVidia marketing
- will consumers care? - that is the question as Intel's PR attempts to erode nVidia Marketing
--[my opinion: CUDA will make it because nVidia PR is really smart]

So .. you all make good points
-[my opinion: let's wait and see - it is a WAR of the Giants! .. a genuine Davey-Jen vs. Goliath-board]

all we can do is observe the spectacle, and some of us may be privileged to do more than watch - or not .. but our speculation does NOT take away from the spectacle


Anyway, I think apoppin needs to realise that he's now among people who actually develop with Cuda, some even for a living.
Now you know WHY i am here and not the "other place" anymore - but i am not here to argue; just to add to what i know, bring what i hold that is in error back into line with reality, and adjust my overall view again.
actually apoppin does also know and he was privileged to get a glimpse of the Vision; i know some of you are under NDA, i am not .. but i also never "blab" details .. that is a no-no anywhere. However, a 'tease' is even encouraged .. and you cannot say differently.

i teased the guys at ATF 6 weeks ago; tell me my predictions were "full of it". i even gave some hints regarding the upcoming CUDA and OctoBeast war in that thread:

http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2164918&highlight_key=y

GT200 is apparently already taped out and waiting for R700!

btw, i guess i made the error of keeping my same username when i signed up here over 2 years ago. it appears that i cannot post in speculation threads at B3D at all as my user name sticks out like a sore thumb =( . . . which is really fine with me; i am done with speculation threads .. can you please point me to the "white paper" discussions .. i am looking to develop my benchmarking skills, and especially i'd like to specialize in IQ comparisons of the filtering used. i knew this site a lot better before i got "stuck" at the other Klein's bottle
[it took me time to figure how to break out =P]

so please, let me say goodbye to these speculation threads .. and i will head to the "white paper" room and be quiet for a while. i'm tired of controversy ... for a while; we will all know soon enough anyway

Peace and aloha!
 
We don't really need to speculate about Cuda, it's already out there (I've played around with it a bit myself). Obviously Cuda will become more powerful with future generations of nVidia chips... but I don't expect anything dramatic in the next 1.5-2 years (until Larrabee arrives), just more processing power, slightly nicer programming interface and some extra features.

Also, there is a considerable difference between nVidia and Intel. Regardless of how good nVidia's GPUs are and will be, they STILL need Intel for their CPUs (and most people also combine these with Intel's chipsets). On the other hand, if Larrabee becomes a success, who needs nVidia? In a way nVidia's future depends on staying ahead of Intel/Larrabee.
On the other hand, Intel can afford to fail with Larrabee, and still be the largest CPU/chipset/IGP manufacturer in the world by far.
 
We don't really need to speculate about Cuda, it's already out there (I've played around with it a bit myself). Obviously Cuda will become more powerful with future generations of nVidia chips... but I don't expect anything dramatic in the next 1.5-2 years (until Larrabee arrives), just more processing power, slightly nicer programming interface and some extra features.

Also, there is a considerable difference between nVidia and Intel. Regardless of how good nVidia's GPUs are and will be, they STILL need Intel for their CPUs (and most people also combine these with Intel's chipsets). On the other hand, if Larrabee becomes a success, who needs nVidia? In a way nVidia's future depends on staying ahead of Intel/Larrabee.
On the other hand, Intel can afford to fail with Larrabee, and still be the largest CPU/chipset/IGP manufacturer in the world by far.
You make good points

i just say nVidia is "holding back" on CUDA .. GT200 will completely open up applications not yet used very well by CUDA before; that is what nVidia's driver team has been working on; they let some other projects lapse a bit; i noticed! --You are not playing with GT200 yet are you? - not with the latest drivers that open it up, are you?(!). if so i am impressed.

anyway .. it does not matter what i think, what i think is unimportant

OK .. let me go in peace .. my ego does not require that i be "right" more than 25% of the time =P

i require white paper discussions, please ... benchmarking is what i want to learn
-the 'controversy' is more limited there i hope

EDIT: i could NOT resist

Intel can NOT afford to fail like with P4 and NetBust

this time they have two competitors
One has learned from the last Encounter and the Other is Merciless
- my prediction: Intel becomes a lesser player in 5 years and never recovers

NOW .. aloha!
 
Actually that's not completely true. It's pretty damn clear from Jen-Hsun's comments at Analyst Day that he wants CUDA to be a major factor in the consumer space - and my info indicates that they have already got design wins thanks to that focus.

The point remains, though, that I am massively unconvinced that they know how to win this battle. They've started focusing on this just now, presumably in great part because Jen-Hsun went 'omfg' at the H.264 encoding presentation he received from those guys. They could have started 18 months ago - but they didn't get the importance of that back then, and focused exclusively on HPC. That lost them valuable time.

This seems like an incredible leap of logic here - that JHH saw the H.264 demo and had a moment of realization. Obviously he was excited. But we really don't know about NVIDIA's efforts to push CUDA with software developers, and developer relations is not an area where NVIDIA is known to skimp. Clearly they are working with Adobe, or at least Tim Murray is working on Adobe.

The reason, in my opinion, that Tesla seems so high-profile is that it is a new hardware line that NVIDIA generates business from. It exists now, and there are real sales to be made from it. Look at how long they waited to really make an effort to highlight the importance of a GPU versus a CPU to mainstream customers. They could have done so years ago, and it would have been true (Vista), but it might have fallen on deaf ears. Instead they have patiently waited for CPU scaling to end, for CUDA, and for the financial wherewithal to mount a campaign. It's a masterstroke.
 
Also, there is a considerable difference between nVidia and Intel. Regardless of how good nVidia's GPUs are and will be, they STILL need Intel for their CPUs (and most people also combine these with Intel's chipsets). On the other hand, if Larrabee becomes a success, who needs nVidia? In a way nVidia's future depends on staying ahead of Intel/Larrabee.
On the other hand, Intel can afford to fail with Larrabee, and still be the largest CPU/chipset/IGP manufacturer in the world by far.
Sure, Intel has an option to fall back on, but I don't see why that's such a big deal. NVIDIA's future depends on whether it stays ahead of Intel, but the same goes when you replace Intel with ATI (even if you left AMD out of the equation). That's the thing about having only one main market you're focusing on (chipsets won't be enough to keep NVIDIA afloat): you always have to innovate and do research to stay on top; if not, that's the end of your company. NVIDIA has "lived" this way since the company was founded, and Intel joining the game makes no difference whatsoever.
EDIT: As an addition to the above, although Intel has big CPU and chipset divisions to fall back on, they are still a money-driven organization. If their GPU business with Larrabee doesn't take off, they won't keep pumping money into it forever. So although the Larrabee team may have a better "buffer" to fall back on, they also have to innovate and do research to get/stay on top, otherwise it's the end for that division, just as it would be for NVIDIA. Is the difference that big?

By the way, why don't we look at the advantage NVIDIA has? Not only does it have years and years of experience in the GPU field, it also has an advantage over Intel because of its sole focus on GPUs. NVIDIA can change and adapt to the market much faster, and transform the entire company as needed.
 
This seems like an incredible leap of logic here - that JHH saw the H.264 demo and had a moment of realization. Obviously he was excited. But we really don't know about NVIDIA's efforts to push CUDA with software developers, and developer relations is not an area where NVIDIA is known to skimp. Clearly they are working with Adobe, or at least Tim Murray is working on Adobe.
I've got good reasons to believe executive management looked at CUDA as primarily an HPC thing until recently.
The reason, in my opinion, that Tesla seems so high-profile is that it is a new hardware line that NVIDIA generates business from. It exists now, and there are real sales to be made from it. Look at how long they waited to really make an effort to highlight the importance of a GPU versus a CPU to mainstream customers. They could have done so years ago, and it would have been true (Vista), but it might have fallen on deaf ears. Instead they have patiently waited for CPU scaling to end, for CUDA, and for the financial wherewithal to mount a campaign. It's a masterstroke.
It's a very good strategy on paper, I'll grant you that. However, marketing campaigns saying "our technology adds more value than the other guy's technology" only really work when it's either already common knowledge, or there is very clear evidence of that being true. Right now, outside of gaming, all they've got to show for it is video decoding, video encoding and some Photoshop stuff. Clearly there's an audience for that, but it's not enough to really convince the majority of the public.

And my point is that these algorithms and applications won't magically come out of third parties suddenly and for no good reason, so plenty of in-house R&D will be necessary for consumer applications of CUDA. So in a way, what I'm really evangelizing is for NV to hire 100 more Tim Murrays and create a coherent plan around that. However, given that Tim still wants to punch me in the face for a certain something, I'd prefer for those 100 persons not to be Tim clones! ;) (plus, you really, really want people with different specific fields of expertise). They should also aggressively acquire companies like Elemental in the coming quarters, or at least invest substantially in them to lock them in via their minority holding.
 
i just say nVidia is "holding back" on CUDA .. GT200 will completely open up applications not yet used very well by CUDA before; that is what nVidia's driver team has been working on; they let some other projects lapse a bit; i noticed! --You are not playing with GT200 yet are you? - not with the latest drivers that open it up, are you?(!). if so i am impressed.

I think your crystal ball is malfunctioning. Cuda is already a very complete product. Most of what's missing is on the software/language level, not the hardware.
The hardware mainly needs to become faster/more efficient at what it's already doing.
So I don't know where you got this idea that some kind of 'magic drivers' would suddenly make Cuda the best thing since sliced bread, or how GT200 will completely turn things around, but I really don't see how any of that would be possible.
What *does* seem likely to happen is that nVidia comes up with some 'killer apps' for Cuda technology... Currently there basically are none. PhysX, video encoding and Photoshop... that sort of stuff would be useful to people.

Intel can NOT afford to fail like with P4 and NetBust

this time they have two competitors

With all due respect, AMD is not a competitor anymore. They're way behind both Intel and nVidia. Even if their next GPU performs well, they still lack a good GPGPU programming framework and userbase (again, developer relations).
I don't see AMD getting back into the game with CPUs or GPGPU on short notice.

And sure Intel can fail with Larrabee. It's a much smaller project than their CPUs... It's just a 'side-project' for Intel. The investment they're putting in is not going to put Intel in any financial trouble whatsoever. They can develop things like Larrabee on the side because the CPU division is running so nicely.
AMD currently has the opposite problem... Their CPU division is bleeding cash, and even if their next GPU is a success, it probably cannot make AMD profitable as a whole, because the market is not big enough.
 
Sure, Intel has an option to fall back on, but I don't see why that's such a big deal. NVIDIA's future depends on whether it stays ahead of Intel, but the same goes when you replace Intel with ATI (even if you left AMD out of the equation). That's the thing about having only one main market you're focusing on (chipsets won't be enough to keep NVIDIA afloat): you always have to innovate and do research to stay on top; if not, that's the end of your company. NVIDIA has "lived" this way since the company was founded, and Intel joining the game makes no difference whatsoever.

It makes a difference in how both companies are going to play this game.
Intel is so much bigger than nVidia that they could even choose to lose money on their GPU department if that means they can win market share. nVidia could never do that.
On the other hand, they may lose interest and just withdraw. But if nVidia is successful in leveraging Cuda, then that is very unlikely, because at that point they'd be cutting into Intel's CPU sales as well.

By the way, why don't we look at the advantage NVIDIA has? Not only does it have years and years of experience in the GPU field, it also has an advantage over Intel because of its sole focus on GPUs. NVIDIA can change and adapt to the market much faster, and transform the entire company as needed.

Intel has shown that they can adapt and transform quite well, because although they're big as a company, they work with multiple teams... Think about how they developed Pentium M/Core on the side, then moved focus from the Pentium 4 to Core 2. They just developed both products side by side, and then picked the best one for the future. In the meantime they started working on Atom, yet another independent product line. And Itanium is still going, but will more or less be merged with x86 technology now. And of course there's Tera-scale, which spawned Larrabee, and might spawn other technologies in the future.

So currently there's a small dedicated Larrabee-team, and if the project is successful, Intel will just put more focus on it. If not, they'll just do something else.
So Intel is basically free to experiment in all kinds of areas... nVidia basically only has one product line, and a failure can mean the end of the company.
 
i just say nVidia is "holding back" on CUDA .. GT200 will completely open up applications not yet used very well by CUDA before; that is what nVidia's driver team has been working on; they let some other projects lapse a bit; i noticed! --You are not playing with GT200 yet are you? - not with the latest drivers that open it up, are you?(!). if so i am impressed.

So really, what should we expect to see with CUDA on the GT200?

Yes to double precision.
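Assuming it gets exposed the obvious way - native double support behind a new compute-capability target in nvcc (something like -arch sm_13; that target name is my guess) - using it from CUDA should be as trivial as this sketch of mine:

Code:
// Hypothetical sketch: a kernel using native double precision, assuming the
// new part exposes it through a new nvcc architecture target.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];               // multiply-add in double
}

int main()
{
    const int n = 1 << 16;
    double *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(double));
    cudaMalloc(&d_y, n * sizeof(double));
    cudaMemset(d_x, 0, n * sizeof(double));
    cudaMemset(d_y, 0, n * sizeof(double));

    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, d_x, d_y);
    cudaThreadSynchronize();
    printf("done\n");

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}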

Yes to better DX/GL interoperability (i.e. at least a faster "context" switch, because it is probably needed for PhysX). They already have a multiple-kernel API in Cuda 2.0 via streams.
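For reference, the stream API already in Cuda 2.0 looks roughly like this - a minimal sketch of my own that queues an async copy on one stream while a trivial kernel runs on another (overlap needs page-locked host memory, hence cudaMallocHost, plus a part with an async copy engine):

Code:
// Minimal sketch of the Cuda 2.0 stream API: async host->device copy on one
// stream while a kernel runs on a second stream.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void busywork(float *p, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        p[i] = p[i] * 2.0f + 1.0f;
}

int main()
{
    const int n = 1 << 20;
    float *h_buf, *d_a, *d_b;
    cudaMallocHost((void **)&h_buf, n * sizeof(float));  // pinned host memory
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));
    cudaMemset(d_b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Copy on stream 0 while the kernel runs on stream 1
    cudaMemcpyAsync(d_a, h_buf, n * sizeof(float), cudaMemcpyHostToDevice, s0);
    busywork<<<(n + 255) / 256, 256, 0, s1>>>(d_b, n);

    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);
    printf("both streams done\n");

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFreeHost(h_buf);
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}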

I don't think it is in the plan to expose parts of the fixed function graphics hardware in CUDA. So no there.

The CUDA spec still has .surf (surface cache), which is not implemented, so perhaps, but I still have my bets on no here. I think they need to expose this, or at least allow writes to texture without an extra memory copy.

Anything else?
 