ATI & Nvidia investigating dual-core GPUs

SA said:
In the long run it will be about providing the highest general purpose programmable floating point performance coupled with some specific rendering and display hardware.

CPU manufacturers have not targeted this market, so they will not likely be the vendors that solve the challenges it presents. General purpose, programmable, very high performance floating point will likely move to the GPU, since those companies are targeting the markets that require it.

To meet these needs in the long run, GPU manufacturers will likely adopt some of the techniques employed by CPU manufacturers, with the use of more dynamic logic, higher frequencies, on-chip caches, general purpose registers, etc. The GPU is therefore likely to consist of several cores: a rendering core handling the raw rendering and display; a specialized graphics core handling 3D-graphics-specific tasks optimized in hardware, such as hidden surface removal, anti-aliasing, etc.; and a general purpose set of high frequency floating point processors with a very large number of concurrent ALUs and their own on-chip cache. All this with a memory bandwidth architecture to match.

This is all pure speculation, but it seems to be the way things are heading.

Apologies for the long quote, but this echoes my own thoughts closely.
What I go on to wonder, then, is: where is the PC in this? Why would you tether the above hardware to a CPU running an office OS from its own (slow) memory, communicating via a slow general interconnect with the simulation/visualization subsystem? Wouldn't it be better to simply build a box specifically for the purpose and see to it that it can connect to PC peripherals and displays?

I just can't see that this is a direction the PC in general needs to evolve in, or even wants to. And I can't see that such a compute engine would be anything but hobbled by being dependent on a PC host, particularly if the evolutionary split widens as you seem to suggest, and the simulation part grows more capable of independent processing. Why not just make it completely independent? For games this would surely be optimal.

* For some time it has seemed inevitable to me that office PCs and media PCs will increasingly diverge in their needs, and really should split into separate evolutionary branches. Is tacking a co-processor card onto an office PC really the model we should follow in the future?

* Intel has the CPU experience, the IP, and the fabs to build whatever silicon is needed. If it seemed as if the floating point coprocessor powerhouses SA describes above would become a major part of overall personal computing revenue, would Intel leave that revenue to be enjoyed by its competitors, uncontested? Wouldn't Intel at such a point introduce a new system architecture optimized for more media-centric computing? (Of course, they would only want to do this, and split the market, if the media part got big enough. As of now they seem not to feel that it's worth it. Oh, and I would tend to disagree about general purpose programmable floating point being the domain of gfx companies exclusively. The first Cell processor was largely designed by IBM. Supercomputing on one end, and DSPs on the other - there is a lot of high performance programmable floating point know-how floating around, and referring to the GPU manufacturers feels very PC-centric.)

* Ultimately, Microsoft decides how the PC will evolve. What is their vision of computing for the future? Whereas Intel publicly extrapolates far into the future, Microsoft has been very silent as to where they would like to see the PC evolve.
 
:)

wireframe wrote:

I think the R520 has reached mythical proportions on forums because it has been bumped around in the roadmaps. I think R520 is nothing much like the R400 (which was "too advanced") that people have been following rumors about. I think people are just holding on to that "too advanced" too much. Maybe back then, but a lot has changed. This is not to say that I don't think R520 will be a great piece of equipment. It just seems to me that people are expecting a little bit too much out of it straight away.


Hey, as long as it's twice as fast as the X850 XT PE, has SM3, and costs about $500, it'll be good enough for me.

I'm expecting NVidia to have a competitive part this summer, too. The mobile 6800 Ultra Go (or whatever it's called) has received far too little attention so far. This part appears to offer truly sensational performance, but there's been no explanation as to why.

Jawed

As asked, take a look at the Tom's Hardware benches. Yes, I know it's Tom's Hardware, but numbers are numbers, and it's the only place I found comparisons to the desktop versions. Keep in mind the Go Ultra is 110nm and has only 12 pipes running at 450/550.

(Three benchmark charts attached.)
 
Re: :)

LordObsidian said:
As asked, take a look at the Tom's Hardware benches. Yes, I know it's Tom's Hardware, but numbers are numbers, and it's the only place I found comparisons to the desktop versions. Keep in mind the Go Ultra is 110nm and has only 12 pipes running at 450/550.
Certainly impressive for a mobile part, but not that surprising actually. Keep in mind that the 6800GT has only 200MPix/s more fillrate and even 3.2GB/s less bandwidth. Also, they compare FW75.80 to 67.41, and a 2.13GHz Dothan beats that P4 @ 3.2GHz.
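
For anyone who wants to check those numbers, here's a quick back-of-the-envelope script in Python (assuming the commonly quoted specs: 6800 GT at 350MHz core / 1000MHz effective memory with 16 pipes, Go Ultra at 450/1100 with 12 pipes, both on a 256-bit bus):

Code:
# Rough fillrate/bandwidth comparison; clocks and pipe counts are the
# commonly quoted specs, so treat the exact figures as approximate.
def fillrate_mpix(pipes, core_mhz):
    # Pixel fillrate in MPix/s = pixel pipelines * core clock (MHz)
    return pipes * core_mhz

def bandwidth_gbs(mem_mhz_effective, bus_bits):
    # Bandwidth in GB/s = effective memory clock * bus width in bytes
    return mem_mhz_effective * 1e6 * (bus_bits / 8) / 1e9

gt_fill = fillrate_mpix(16, 350)     # 6800 GT:       5600 MPix/s
go_fill = fillrate_mpix(12, 450)     # Go 6800 Ultra: 5400 MPix/s
gt_bw   = bandwidth_gbs(1000, 256)   # 6800 GT:       32.0 GB/s
go_bw   = bandwidth_gbs(1100, 256)   # Go 6800 Ultra: 35.2 GB/s

print(f"fillrate edge, GT:  {gt_fill - go_fill} MPix/s")   # 200
print(f"bandwidth edge, Go: {go_bw - gt_bw:.1f} GB/s")     # 3.2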
 
Most of you have probably already read the following from the GPU Gems whitepaper at NV's homesite (for the book):

Challenge: GPU Functionality Subsumed by CPU
(or Vice Versa)?

We can be confident that CPU vendors will not stand still as GPUs incorporate more processing power and more capability onto their future chips. The ever-increasing number of transistors with each process generation may eventually lead to conflict between CPU and GPU manufacturers. Is the core of future computer systems the CPU, one that may eventually incorporate GPU or stream functionality on the CPU itself?
Or will future systems contain a GPU at their heart with CPU functionality incorporated into the GPU? Such weighty questions will challenge the next generation of processor architects as we look toward an exciting future.

Can I take SA's notes a step further and ask (again) whether his speculation can also be seen (in relative terms) as a System-on-Chip? I asked a similar question a while ago (if there was a reply and I missed it, my apologies), and I was even thinking of eDRAM in such a package (mostly for cache functions).

Would one theoretically still need a CPU alongside a GPU such as SA proposes (or something similar) in the future? (For more modest demands, of course.)
 
trinibwoy said:
991060 said:
Why would NVIDIA want to split the GPU into two parts in the first place?

I have no clue about process tech, but maybe two 300-million-transistor cores are easier to manufacture than a single 600-million-transistor core. Complexity is reduced considerably in the case of dual-core CPUs, so maybe the same goes for GPUs.

Think yields.
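
The back-of-the-envelope version, for anyone who wants to play with numbers: in a simple Poisson defect model, yield falls off exponentially with die area, so two half-size dies lose far less good silicon to random defects than one big die does. A toy sketch in Python (defect density and die areas are invented, purely for illustration):

Code:
import math

# Toy Poisson yield model: yield = exp(-area * defect_density).
# All numbers below are made up; real defect densities are not public.
defect_density = 0.5                 # defects per cm^2 (assumed)
big_die_area   = 3.0                 # cm^2, hypothetical ~600M-transistor GPU
small_die_area = big_die_area / 2    # cm^2, hypothetical ~300M-transistor core

yield_big   = math.exp(-big_die_area * defect_density)    # ~0.22
yield_small = math.exp(-small_die_area * defect_density)  # ~0.47

# Same total silicon either way, but far more of it ends up usable
# when a single defect only kills half as much area.
print(f"monolithic die: {yield_big:.0%} good")
print(f"two half-size dies: {yield_small:.0%} good each, "
      f"{2 * yield_small * small_die_area:.2f} of every {big_die_area} cm^2 usable "
      f"vs {yield_big * big_die_area:.2f} cm^2 for the monolithic die")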
 
Think yields.
I'm assuming that by this, you mean... take a dual-core half-billion transistor GPU and if there are defects in one of the two cores, you just disable it and sell the thing as a "value" chip?

Sometimes I wonder if you can't just do that with all the separate pipelines as it is. When you look at a CELL PE as something outstandingly revolutionary, it's easy to forget that GPUs are already designed much the same way -- a centralized control logic and a whole bunch of little independent pipelines.

What I would like to know, though, is if this sort of thing can mean rendering multiple contexts concurrently. For instance, rendering shadow maps from entirely different light sources on the two different cores. Or rendering a low-res and a high-res version of the scene concurrently for depth-of-field. Well, this is more of an API-related question I suppose, but would there one day be a construct for issuing stuff to the separate cores independently, or would it simply be treated like a bigger single GPU?
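
Just to make the two alternatives concrete, here's a toy CPU-side sketch in Python; nothing in it models a real driver or API, it's only meant to show the difference between addressing the cores separately and letting one queue be balanced across them behind the scenes:

Code:
# Toy illustration only: two "cores" simulated with threads, no real GPU API.
from concurrent.futures import ThreadPoolExecutor
import time

def render(job, core_id):
    time.sleep(0.05)                         # stand-in for actual rendering work
    return f"{job} finished on core {core_id}"

# Explicit model: the application gives each core its own context/workload,
# e.g. a shadow map for one light on core 0 while core 1 renders the main view.
with ThreadPoolExecutor(max_workers=2) as cores:
    shadow = cores.submit(render, "shadow map (light 0)", 0)
    scene  = cores.submit(render, "main scene", 1)
    print(shadow.result())
    print(scene.result())

# Implicit model: the application issues everything to one queue and the
# "driver" decides how to spread it across the cores -- from the API's point
# of view it just looks like one bigger GPU.
def unified_submit(jobs):
    with ThreadPoolExecutor(max_workers=2) as cores:
        futures = [cores.submit(render, job, i % 2) for i, job in enumerate(jobs)]
        return [f.result() for f in futures]

print(unified_submit(["shadow map (light 0)", "main scene"]))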
 
I'm assuming that by this, you mean... take a dual-core half-billion transistor GPU and if there are defects in one of the two cores, you just disable it and sell the thing as a "value" chip?


Or replace the single defective core


Sometimes I wonder if you can't just do that with all the separate pipelines as it is. When you look at a CELL PE as something outstandingly revolutionary, it's easy to forget that GPUs are already designed much the same way -- a centralized control logic and a whole bunch of little independent pipelines.

It is done now; how do you think I turned my 12-pipe 6800 STD into a 16-pipe card?
 
_xxx_ said:
Who is SA, or what's so special about his (her?) posts?

They are generally very good posts, with a certain kind of cleverness to them. I wouldn't say that SA posts anything revolutionary, but SA seems to bring out things people hadn't thought about, and I have always assumed that this is because of SA's wide and deep knowledge of 3D tech.

I don't have a clue who SA is.
 
rendezvous said:
_xxx_ said:
Who is SA, or what's so special about his (her?) posts?

They are generally very good posts, with a certain kind of cleverness to them. I wouldn't say that SA posts anything revolutionary, but SA seems to bring out things people hadn't thought about, and I have always assumed that this is because of SA's wide and deep knowledge of 3D tech.

I don't have a clue who SA is.
Does it really matter?
SA knows his stuff, and sometimes takes a broader look at issues, getting away from the myopia of the immediate. Being exposed to someone who can see both deeper and wider than they can tends to stump people, so the threads where he contributes tend to be short.

This time around, though, he made a projection into the future, which is always a shaky business. He makes perfect sense if you extrapolate where gfx chips have been going. He may even have insider knowledge of the actual product plans of at least one of the gfx companies involved. But while I recognize him as having deeper knowledge than most here in the field of 3D tech, what would make sense as a direction for gfx in and of itself may still not fit into the bigger picture - how does that extrapolation fit with where people want their computers to go? And, for that matter, with industry giants Intel and Microsoft?

Which was what I tried to draw him on.
Those questions are determined by market and industry forces rather than technology per se, and are thus more difficult for a technologist to address authoritatively.
Leaving space for speculation, which is what this site thrives (festers?) on.
 