What are the benefits of integrating the GPU/CPU into a single package?

OVERLORD

Newcomer
Haven't noticed it posted on the forum, so I apologise if this has been posted and discussed already.

See here http://www.anandtech.com/tradeshows/showdoc.aspx?i=2511

With the arrival of multicore CPUs in PCs, and those proposed for next-gen consoles, the pros and cons have been vehemently debated here and elsewhere.

What impact could this recent development play in future consoles?

Intel may no longer be in the game this gen, but what impact would combining CPU and GPU cores in a single package have?

What impact will this have on the industry, on future advancements in architecture, and on those companies who have seen huge advancements via collective collaboration and competition?

Could the GPU run at the same speed as the CPU, for a start? There is even speculation about the PCIe controller being integrated into future CPUs from AMD.

What impact would this have on heat?

Maybe someone more savvy than me can answer some of the above.

Not wholly console-related, I know, but this does fall within the size vs. performance argument previously discussed here in relation to next-gen consoles.

Interesting times indeed.
 
I'd say high-end GPUs are getting too big to be integrated into the CPU silicon.

Any GPU/CPU combo will be a budget solution.
 
It makes sense when dealing with old or low-budget ICs, like the EE+GS, but it would be way too costly for high-end parts.
 
Agreed, there's no way it could be done with bleeding-edge components; not at these die sizes. I mean, even the eDRAM and Xenos are on separate dies. But doubtless, as time goes on, it makes increasing sense to put yesteryear's silicon on more highly integrated chips. Intel's graphics solutions are pretty transistor-lite, so for them it's no real issue. And for them it's not even a unified die - rather just a separate die in the same package. For this generation of consoles, we'll see if five years from now we have more EE+GS equivalents. I would certainly imagine that to be a goal.
 
With high-end parts you have the issues of heat and power consumption. Sticking two large, power-hungry, hot chips in the same package is a recipe for problems.

Imagine a P4 3.8GHz + 6800 Ultra in one package :oops:
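
As a rough back-of-the-envelope illustration of that combo's thermal problem (the power figures below are ballpark assumptions, not official specs):

Code:
# Hypothetical package power for a P4 3.8GHz + 6800 Ultra in one package.
# TDP/board-power figures are rough ballpark assumptions, not official specs.
p4_3800_tdp_w = 115       # Prescott-class P4 at 3.8 GHz, roughly
gf6800_ultra_w = 100      # GeForce 6800 Ultra board power, roughly

print(f"Combined: ~{p4_3800_tdp_w + gf6800_ultra_w} W in a single package")  # ~215 W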

There are two other issues. GPUs need their own pool of really fast memory - which also needs to be cooled. So there are going to have to be two buses. And at 256-bit for high-end cards, that is a lot to fit into a standard IC.

The other issue is the CPU <=> GPU bus. While having two chips in the same package could enable some really neat, fast connections, you run into the question of whether they would actually spend the time to do something killer, plus the upgrade issue... what happens when your GPU is outdated but your CPU is not?

Even in the above scenario, the reality is they would most likely use a PCIe connection... which begs the question: why even stick them in the same package? You would have more flexibility by using an onboard PCIe bus and two distinct chips... this would be cheaper for yields, reduce the heat issue, etc.
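
As a rough sketch of why the local memory bus is the harder problem than the CPU <=> GPU link (the PCIe generation, bus width and transfer rate below are illustrative assumptions):

Code:
# Rough bandwidth comparison: a PCIe x16 link vs. a 256-bit local GDDR bus.
# Transfer rates are illustrative assumptions, not any specific product's spec.
pcie_x16_gb_s = 16 * 0.25          # PCIe 1.x: ~250 MB/s per lane, per direction
gddr_bus_bytes = 256 / 8           # 256-bit memory interface
gddr_mt_s = 1200                   # effective memory transfer rate (assumed)

print(f"PCIe x16, one direction: ~{pcie_x16_gb_s:.0f} GB/s")                # ~4 GB/s
print(f"256-bit GDDR @ {gddr_mt_s} MT/s: ~{gddr_bus_bytes * gddr_mt_s / 1000:.1f} GB/s")  # ~38 GB/s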

At the budget end this may make sense, but really CPUs can be sped up in other areas. Why not a dual-core CPU with a third, large vector processing unit that is easy to program for? The PPC core in the 360 is capable of almost 40 GFLOPs, and it is only ~40M transistors for the entire core. With the transistor budget still increasing but frequencies halted, more specialized processing units would be great.

PCs are already killer at processing general code... bulking up their FP performance would be well worth the effort IMO. Honestly, I would rather have seen this move than straight dual core, e.g. 1 AMD64 + 1 super-fast vector unit = media and gaming bliss!
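
For what it's worth, here is the back-of-the-envelope arithmetic behind peak numbers like that ~40 GFLOPs figure; the vector width, issue rate and clock below are assumptions for illustration, not any particular core's actual spec:

Code:
# Peak FLOPS of a SIMD/vector unit = lanes x FLOPs per lane per cycle x clock.
# All values are illustrative assumptions, not a spec for any particular core.
simd_lanes = 4            # e.g. a 4-wide single-precision vector unit
flops_per_lane = 2        # a fused multiply-add counts as two FLOPs
clock_ghz = 3.2

print(f"Peak: {simd_lanes * flops_per_lane * clock_ghz:.1f} GFLOPS")
# 25.6 GFLOPS here; a wider or dual-issue unit scales this up accordingly.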
 
Something like this (which is going to be available for PCIe):

http://www.tomshardware.com/hardnews/20050825_210332.html
During a demonstration at IDF, the PCI Express card - consuming a total of just 25 watts - showed a performance peak of about 55 GFlops with a minimum of about 45 GFlops. Putting this into perspective, a 2-way Xeon system with 3.2 GHz processors was shown running parallel to the card with a peak performance of about 10 GFlops.

~45 GFLOPs vs. ~10 GFLOPs for two Xeons, in *real world* performance.

I believe the chip runs at 500MHz. Two chips at 25W is not much... if Intel got their hands on this and tweaked it to be very user-friendly for programmers and to work seamlessly within the design... WOW.
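
A quick FLOPS-per-watt comparison using the figures quoted above; the per-Xeon power is my own assumption, only the card numbers come from the article:

Code:
# Rough efficiency comparison from the numbers quoted above.
# The per-Xeon power figure is an assumption; the card figures are from the article.
card_gflops, card_watts = 45, 25          # sustained figure quoted for the card
xeon_gflops, xeon_watts = 10, 2 * 110     # 2x 3.2 GHz Xeon at ~110 W each (assumed)

print(f"Card:  {card_gflops / card_watts:.2f} GFLOPS/W")   # ~1.80 GFLOPS/W
print(f"Xeons: {xeon_gflops / xeon_watts:.2f} GFLOPS/W")   # ~0.05 GFLOPS/W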
 
The Clearspeed chip's been brought up a number of times in previous threads. I think it's a cool idea (the whole compute-expansion via slot), but the consensus has generally been that the cards themselves are only suitable to a very narrow range of applications, despite/in spite of their Flops potential.
 
xbdestroya said:
The Clearspeed chip's been brought up a number of times in previous threads. I think it's a cool idea (the whole compute-expansion via slot), but the consensus has generally been that the cards themselves are only suitable to a very narrow range of applications, despite/in spite of their Flops potential.

The key word is "like". It is an example ;) to demonstrate they exist.

Another example would be the CELL SPEs. Yes, they are limited also, but the same theory holds: additional processing elements that excel at FP performance (the one area where PC CPUs really lag). I think for the PC space you obviously need a specialized solution. Something that is user-friendly. Having one, at most two, units on a PC CPU would be best. Make them bigger, more robust, etc... You won't get the same peak performance as smaller, more specialized units, but the benefits would be pretty impressive.

Asking PC developers to try to balance a ton of "co-processor" like units would not be good. Just one or two heavy lifters would do fine.

Anyhow, any feature you add (physics chip, Clearspeed FP co-processors, SPEs, etc.) is going to be limited in functionality and in what it can work on (hint: none of those are very robust compared to a PC CPU), but something that fits the PC space would be nice. And considering PCs are looking at 1.3B-transistor chips down the road, I *think* they could squeeze in a robust FP co-processor or whatnot ;) The added media potential would be really nice.

Much better than integrating a GPU IMO.
 
Well, I just think the integrated GPU is targeting Intel's low-end OEM market, maybe laptops also. Talk is they're going to leave the IGP-chipset business anyway, right? Well - this looks like their possible replacement strategy for that. :)

Anyway I completely agree that any one of those other things 'on-chip' would be much more desirable than a mediocre graphics core. Well, for people like 'us' at least...

As for the Clearspeed, well I think the Cell/SPE's are more versatile. But the Clearspeed has definite applications and uses, and I think it's a great idea. It's interesting to read about the add-in card and think about the fact that indeed, as far as supercomputing/scientific workstation work goes, Cell and Clearspeed will be fighting over a lot of the same markets. It'll be interesting to see if both, either, or neither eventually catch on in any of those markets.
 
You can put a fat pipe between the two, share a memory controller and do a few little tricks here and there.
 
For a CPU + IGP/budget GPU solution in which both share the same memory, it would save some MB space and transistors/etc, no? Seems very desirable for laptops and Mac mini-a-likes.

And if IGPs tend to omit vertex shaders altogether, this closer pairing might help performance in that regard, as Saem said.
 
Pete said:
For a CPU + IGP/budget GPU solution in which both share the same memory, it would save some MB space and transistors/etc, no? Seems very desirable for laptops and Mac mini-a-likes.

And if IGPs tend to omit vertex shaders altogether, this closer pairing might help performance in that regard, as Saem said.

The problem is that 3D is the next "big thing". It has followed the same general curve as other market items and is finally hitting the market top to bottom. Specifically for PCs, Windows Vista will offer users a lot of new experiences with 3D. Neutering a PC sounds like an awfully short-sighted move. Introducing these cut-down 3D engines that are really worthless does not help the consumer.

In the long run this seems feasible, if Intel used a TBDR and a new memory standard was adopted (like DDR3, which is said to offer over 20GB/s of bandwidth; for comparison, dual-channel DDR400 is a mere 6.4GB/s), but you still have heat and power issues.
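
For reference, here is where bandwidth figures like those come from; the DDR3 speed grade is an assumption picked to land near the quoted 20GB/s:

Code:
# Peak memory bandwidth = channels x bus width (bytes) x transfer rate.
# The DDR3 speed grade below is an assumption, not a fixed spec.
def peak_gb_s(channels, bus_bits, mt_per_s):
    return channels * (bus_bits / 8) * mt_per_s / 1000

print(f"Dual-channel DDR400:    {peak_gb_s(2, 64, 400):.1f} GB/s")   # 6.4 GB/s
print(f"Dual-channel DDR3-1333: {peak_gb_s(2, 64, 1333):.1f} GB/s")  # ~21.3 GB/s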

Not to mention that graphics technology and CPU technology do not advance on the same cadence. You are talking about updating both on a more frequent basis, which means higher costs and probably fewer advancements. The only reason Intel would integrate would be to grab more of the market.

But as it is there is an entire field of processing that could be improved that would help graphics, sound processing, media and media encoding/decoding, etc...

I would see it as a big loss if Intel started including GPUs as standard on their CPUs, yet they still had issues hardware-encoding an HD movie stream while doing other tasks. Physics and AI are not magically going to get better either.
 
Short answer: there are no significant benefits.

That kind of locality just to get enormous bandwidth between processors is not interesting.
 
Not only are there benefits in terms of communication, but there are also gains to be had from shared resources. It would mean a lot to CPU performance to have access to 512 MB of memory at 40GB/s or so... We have a large and expensive memory pool there that goes to waste for anything you do on your PC, except during certain specific 3D scenarios.
 
Because they are doing more of the processing chores, GPUs may become the CPUs of the future. ;)

The latest GPUs now perform non-graphics functions such as motion planning and physics for game AI, making them the ultimate processor for entertainment applications. In addition, GPUs can compute fast-Fourier transform functions such as real-time MPEG video compression or audio rendering for Dolby 5.1 sound systems.

Source: Computer, October 2003
 
Thank you all for the responses.

standing ovation said:
Because they are doing more of the processing chores, GPUs may become the CPUs of the future. ;)

Funny you say that. I saw this while researching the previous link.

I wasn't only targeting Intel with this observation, as Nvidia and others could also be in the frame for something like this. Read http://www.wired.com/wired/archive/10.07/Nvidia.html

Look at Nvidia and how they push the envelope with their GPUs, and are now pulling the strings for chipsets, be it on Intel or AMD, i.e. SLI capability and the previous SoundStorm chipset integration.

I see this development as a natural progression in computing architecture.

The two stories, although separated by several years, appear to be following this path.
 
Interesting how nVidia are talking of a synergy with Sony's view of the future. Perhaps nVidia will have some input in Cell2?
 