Will Future CPUs have On-Die Coprocessors and Would Those Have Any Impact on Graphics

Farid

Artist formerly known as Vysez
I was browsing PC.Watch and I stumbled upon a very interesting article from Hiroshige Goto on AMD Torrenza:
http://pc.watch.impress.co.jp/docs/2006/0713/kaigai287.htm
To translate use Excite's translator:
http://www.excite.co.jp/world/english/web
And in light of the recent events on the AMD/ATI side, I find these diagrams really interesting:

[Diagrams from the article: kaigai287_03l.gif, kaigai287_02l.gif]


Of course, the fact that quite a few rumors have started spreading around forces us to ask ourselves how this new paradigm shift, Torrenza, will be affected by the ATI buyout.

Will we see only low-end on-die CPU-GPU integrations, or will we see something else, something more "ambitious"?
 
Personally, I think the integrated GPU will take the form of a large FPU array, which can then be tasked with graphics, physics, media and other things.
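The "one FPU array, many workloads" idea can be sketched in a few lines (Python, purely illustrative -- the `fma` helper and all the numbers are made up, not a model of any real hardware):

```python
# Toy model: one generic FP "array" (an elementwise fused multiply-add)
# reused for graphics-, physics-, and media-style work alike.

def fma(a, b, c):
    """Elementwise a*b + c -- the primitive a big FPU array would run."""
    return [x * y + z for x, y, z in zip(a, b, c)]

# Graphics-style work: scale vertex coordinates, then translate (one axis shown)
verts = [0.0, 1.0, 2.0]
scaled = fma(verts, [2.0] * 3, [0.5] * 3)

# Physics-style work: Euler integration step, v' = a*dt + v
vel = fma([9.8] * 3, [0.016] * 3, [1.0, 2.0, 3.0])

# Media-style work: gain/bias on samples, s' = s*gain + bias
pixels = fma([10.0, 20.0, 30.0], [2.0] * 3, [5.0] * 3)
```

The point of the sketch: all three workloads bottom out in the same multiply-add primitive, which is what would let a single array be re-tasked between them.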
 
Will we see only low-end on-die CPU-GPU integrations, or will we see something else, something more "ambitious"?

There is no question in my mind they are aiming for something more ambitious. I tend to think they are aiming at market territory reserved today for parts such as the X1600 --in other words, at least the lower half of the midrange discrete market.

Now, will they get there? Hellifino. Many a marriage has started with great optimism and enthusiastic swapping of bodily fluids, only to end less happily a few years down the road.

Whether Torrenza will serve as the basis for that effort, I have no clue.
 
And obviously, as nAo (or Deano) will gladly explain to you, CELL is the best GPU ever.
More seriously, Torrenza is a very interesting option, but I'm not convinced it's the best one for the industry, let alone for long-term competition.

Uttar
 
And obviously, as nAo (or Deano) will gladly explain to you, CELL is the best GPU ever.
Co-processors tend to be specialized units, in general, unlike Cell's SPEs, which are just streamlined cores.
 
no doubt we will see future CPUs combined with on-die graphics processing. but I believe that for the most part, the graphics portion will be low-end to mid-range.

Intel has stated that their future CPUs will contain graphics processing. obviously AMD-ATI is aiming for similar products.

the best graphics will still be done on separate GPUs, at least for the foreseeable future.



then there is something that I am wondering about: a truly new class of processor, neither a CPU nor a GPU. call it an EPU (entertainment processing unit, lol) that's more unified and does all types of processing: the work done by general-purpose CPUs, physics processors, sound processors and graphics processors. a true system-on-a-chip, with very large amounts of on-die memory. billions & billions of transistors.

let's say that these processors are in the ~5 GHz range. well, the graphics portion, I would hope, would be able to take advantage of the far higher clock speeds of CPUs, in addition to the massive parallelism that GPUs already have. this might be one of the ways we get a big leap in graphics quality over current DX9 SM3.0 and upcoming DX10 SM4.0 GPUs. pure speculation, but that's where I imagine graphics going over the next 5 to 10 years.
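A quick back-of-envelope number for such a chip (every figure below is an assumption for illustration, not any announced spec):

```python
# Back-of-envelope peak throughput for a hypothetical ~5 GHz "EPU".
# All figures here are made-up assumptions, not any real chip's specs.

clock_hz = 5e9       # assumed 5 GHz clock
fp_lanes = 64        # assumed number of parallel FP lanes
ops_per_lane = 2     # a fused multiply-add counts as 2 FLOPs per cycle

peak_gflops = clock_hz * fp_lanes * ops_per_lane / 1e9  # theoretical peak only
```

With those assumptions the peak lands at 640 GFLOPS, which shows the speculation's core trade-off: clock speed alone buys little, so the "EPU" would still need GPU-style lane counts to be interesting.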
 
The other two questions, aside from how far up the food chain AMD/ATI can go:

1). How far up the same food chain is Intel looking to go? (and the same caveat as AMD/ATI above --how successful will they be at trying it?)
2). If both AMD/ATI and Intel are successful at driving integrated through the bottom half of the current graphics hierarchy, then how much of NV's market share can they lose at the bottom and still be viable? Or do they just go merrily on their way competing in the top half?
 
I see system-level integration being done fairly quickly, and it should provide a good performance boost via a massive increase in bandwidth and decrease in latency CPU <-> GPU.
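A toy transfer-time model shows why both halves of that claim matter; the bus bandwidths and latencies here are illustrative assumptions, not measured numbers:

```python
# Toy model: time to move a buffer CPU -> GPU = link latency + payload time.
# The bandwidth and latency figures are illustrative assumptions only.

def transfer_us(nbytes, gb_per_s, latency_us):
    """Transfer time in microseconds for nbytes over a link."""
    return latency_us + nbytes / (gb_per_s * 1e9) * 1e6

buf = 8 * 1024 * 1024  # an 8MB vertex/physics buffer

over_bus = transfer_us(buf, 4.0, 1.0)      # discrete card over a ~4 GB/s bus
same_socket = transfer_us(buf, 20.0, 0.1)  # assumed tighter on-package link

# For small buffers the latency gap dominates; for large ones, bandwidth does.
```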

Socket-level integration is probably more problematic, as you need to work out some way of providing off-socket, high-bandwidth RAM to the GPU; but at the lower mid-range you could probably get away with normal system RAM, in which case this could be first out of the blocks.
Or, you could probably get, say, 64MB on-package?
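A quick sanity check on the 64MB figure, assuming a display setup typical of the era (the resolution and buffer formats are my assumptions):

```python
# Would 64MB of on-package RAM cover a framebuffer and leave room for textures?
# Resolution and buffer formats below are assumptions typical of the era.

w, h = 1280, 1024
color_bytes = 4     # 32-bit color
depth_bytes = 4     # 24-bit depth + 8-bit stencil
color_buffers = 2   # double-buffered

fb_bytes = w * h * (color_bytes * color_buffers + depth_bytes)
fb_mb = fb_bytes / 2**20        # framebuffer footprint in MB
textures_left_mb = 64 - fb_mb   # what remains for textures and geometry
```

Under these assumptions the framebuffer takes 15 MB, leaving about 49 MB, so 64MB on-package is tight but not unreasonable for a lower mid-range part.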

ATI would probably be first out of the blocks with Torrenza GPUs, but there is no reason Nvidia can't join the party.
 
Socket-level integration is probably more problematic, as you need to work out some way of providing off-socket, high-bandwidth RAM to the GPU; but at the lower mid-range you could probably get away with normal system RAM, in which case this could be first out of the blocks.
Or, you could probably get, say, 64MB on-package?

They could just solder the high-speed RAM onto the package/substrate that goes into the socket, and ditch the DIMM sockets for the second CPU socket altogether.

I'd like to see CPU packages like this as well: 2GB per socket on the CPU substrate, ditch DIMM sockets to allow for doubling up on CPU sockets in a cost-effective manner, either 1->2 or 2->4 (I know the performance increase doesn't make up for the lack of flexibility, so we won't see this anytime soon, but a guy can dream, right?)

Cheers
 
If it doesn't work out with the socket, the thing is doomed. We all like to be able to upgrade stuff, which would be impossible with soldered-on RAM or integrated gfx. The only things I see happening here are a slightly better low end and lower costs, but no impact whatsoever on the mid-to-high-end side. Goes for both platforms.
 
If it doesn't work out with the socket, the thing is doomed. We all like to be able to upgrade stuff, which would be impossible with soldered-on RAM or integrated gfx. The only things I see happening here are a slightly better low end and lower costs, but no impact whatsoever on the mid-to-high-end side. Goes for both platforms.

I think upgrading is mostly a non-issue, for the general public at least. How often do you upgrade anyway?

if you upgrade less often than every two years, you'd normally upgrade:
1. CPU (obviously),
2. Motherboard (because the CPU socket du jour changed),
3. DRAM type (because of changes to the motherboard, and to take full advantage of your new CPU),
4. GPU (obviously).

All in all: basically a complete upgrade.

Cheers
 
Most people will upgrade sequentially. Whatever component is the weakest link. I'll usually upgrade the gfx card twice as often as the CPU and add more RAM somewhere along the way, and then later upgrade the CPU+mobo prior to the next gfx upgrade, while keeping all the rest as long as I can.
 
I see system-level integration being done fairly quickly, and it should provide a good performance boost via a massive increase in bandwidth and decrease in latency CPU <-> GPU.

But the question is: do we really need that boost in bandwidth etc. between the CPU and GPU when the target is high-performance graphics? It all sounds cool with this 'integrated' approach in a console, but I frankly haven't seen the need in PC gaming yet. The bottleneck is still the GPU and/or the CPU itself.

No, all this talk is about providing a cheap midrange integrated GPU option all in 'one package' at more decent prices than a similar combo costs today. Which in itself is nice of course. :smile:
 
Tell the designers of the Xbox 360 & PS3 that there is no gain to be had in linking the CPU & GPU with big bandwidth, cos they seem to think it's a good idea :???:
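For context, the commonly cited theoretical peak figures for those console links versus a PC's PCIe 1.x x16 slot (treat these as spec-sheet peaks, not sustained throughput):

```python
# Commonly cited theoretical peak CPU<->GPU link bandwidths, in GB/s.
# These are spec-sheet peaks, not measured sustained throughput.
links = {
    "PCIe 1.x x16 (PC, per direction)": 4.0,
    "Xbox 360 CPU FSB (per direction)": 10.8,
    "PS3 Cell->RSX (FlexIO write)": 20.0,
    "PS3 RSX->Cell (FlexIO read)": 15.0,
}

pc = links["PCIe 1.x x16 (PC, per direction)"]
# Ratio of each link to the PC bus, rounded to one decimal place
advantage = {name: round(bw / pc, 1) for name, bw in links.items()}
```

On those numbers the consoles' CPU-GPU links run roughly 2.5x to 5x the per-direction peak of a contemporary PC slot, which is the gain the console designers were after.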
 
Tell the designers of the Xbox 360 & PS3 that there is no gain to be had in linking the CPU & GPU with big bandwidth, cos they seem to think it's a good idea :???:

Those are already theoretically slower than the current super-high-end PC HW, so in the PC space these would be upgraded this year or maybe next.

Though that's not comparable in any way, consoles being what they are. The reasons for the integration there are cost savings, and it's possible thanks to the platform running highly customized/optimised proprietary code.
 
Those are already theoretically slower than the current super-high-end PC HW, so in the PC space these would be upgraded this year or maybe next.
care to elaborate?
Maybe I'm not aware of these super-high-end PCs..
 
I mean that the current top of the line features more flops in pretty much every area, or will within a year or so. Gfx memory bandwidth is already higher now, so I don't get what you're asking for?
 
I mean that the current top of the line features more flops in pretty much every area, or will within a year or so. Gfx memory bandwidth is already higher now, so I don't get what you're asking for?
you were replying to an arrrse's statement about bandwidth between CPUs and GPUs, so I was wondering if there is any high end PC out there that can touch something like CELL<->RSX in this regard.
 
you were replying to an arrrse's statement about bandwidth between CPUs and GPUs, so I was wondering if there is any high end PC out there that can touch something like CELL<->RSX in this regard.

Oh, surely not yet. But I guess we'll see that soon as well. I meant the overall system performance when (theoretically) fully optimized.
 