Will multicore take over 3D rendering?

Techno+

hi,

do you think multicore will take over 3D rendering, or will it be harder to program multicore processors?
Offline rendering already happens on multicore CPUs (Pixar), but do you think multicore can also bring benefits to real-time rendering, or is the future of graphics only on GPUs (or GPUs on die with CPUs)?

I need your opinions.

thanks
 
Quick note: multi-core just means you have more than one processing core on the same die, it's more of an indication of location than anything else.

GPUs could just as easily be multi-core, though the benefits aren't quite what they are for CPUs.

I think by multi-core, you mean multi-core CPUs, and it doesn't look cost-effective or physically possible to connect the large number of CPUs needed to match a single GPU.

Depending on the design, you would need dozens to hundreds of CPUs to match a GPU on some tasks, and it's just not possible to fit that much on a single chip.
CPUs are mostly made of logic that isn't very useful for fast graphics.
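As a rough back-of-the-envelope sketch of that gap (all the clock and throughput numbers below are assumed ballpark figures for a fast CPU core and a high-end GPU, not measurements):

```python
# Rough, assumed peak-throughput figures -- illustrative only, not benchmarks.

CPU_CORE_GHZ = 3.0           # assumed clock for one general-purpose core
CPU_FLOPS_PER_CYCLE = 8      # e.g. one 4-wide SSE add + one 4-wide SSE mul per cycle
cpu_core_gflops = CPU_CORE_GHZ * CPU_FLOPS_PER_CYCLE          # ~24 GFLOPS per core

GPU_SHADER_GHZ = 1.35        # assumed shader clock for a high-end GPU
GPU_SHADER_ALUS = 128        # assumed number of scalar shader ALUs
GPU_FLOPS_PER_ALU_CYCLE = 2  # one multiply-add counted as two flops
gpu_gflops = GPU_SHADER_GHZ * GPU_SHADER_ALUS * GPU_FLOPS_PER_ALU_CYCLE  # ~346 GFLOPS

print(f"CPU cores needed just to match raw shader FLOPS: ~{gpu_gflops / cpu_core_gflops:.0f}")
# ~14 cores on raw arithmetic alone -- and that ignores the texture units,
# ROPs and rasteriser the CPU cores would also have to emulate in software,
# which is where the "dozens to hundreds" figure comes from.
```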
 

but if you cram 80 cores on a single chip to deliver 1 TFLOP, will it be possible to beat a GPU?
 

No, because they'll probably waste a lot of those flops on things the GPU does naturally. Unless the cores are given filtering, AA, and special-function hardware (becoming mini-GPUs in the process), they'll have to emulate those functions.

That means what takes one instruction for a GPU may take several to dozens to hundreds of instructions on a CPU, which would cut performance significantly.

The CPUs also need to worry about CPU tasks the GPU doesn't, so the CPUs can't spend all their resources on graphics like the GPU can.
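To make the one-GPU-instruction-versus-many-CPU-instructions point concrete, here is a minimal sketch of what a single bilinearly filtered texture fetch costs when emulated in software (plain Python for readability; the function name and the tiny texture are made up for illustration, and real code would be SIMD, but the operation count is the point):

```python
def bilinear_sample(texture, u, v):
    """Emulate one bilinearly filtered texture fetch in software.

    A GPU's texture unit does all of this (plus addressing modes and
    format conversion) as part of a single fetch; in software it costs
    dozens of arithmetic and memory operations per sample.
    """
    height = len(texture)
    width = len(texture[0])

    # Map normalised coordinates to texel space.
    x = u * (width - 1)
    y = v * (height - 1)

    # Integer texel coordinates of the 2x2 footprint, clamped to the edge.
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)

    # Fractional weights.
    fx, fy = x - x0, y - y0

    # Four texel reads plus the weighted blend.
    t00, t10 = texture[y0][x0], texture[y0][x1]
    t01, t11 = texture[y1][x0], texture[y1][x1]
    top = t00 * (1 - fx) + t10 * fx
    bottom = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bottom * fy

# Example: a 2x2 single-channel texture sampled at its centre.
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.5
```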
 
but if you cram 80 cores on a single chip to deliver 1 TFLOP, will it be possible to beat a GPU?
Maybe. It will depend on what kind of content it is rendering.

Maybe in future games with very deep passes (~100), things will lean more toward a more general processor.
 
Techno+: Why do you need our opinions? Are you writing up a research paper on the subject?

Perhaps I'm being rude, but I'd like to point out that someone in one of your other threads suggested that you read a book. This is quite good advice, because looking over all of the subjects you have posted thus far seems to indicate that you'd rather ask questions here than research answers yourself. Please, have the common courtesy to at least use Wikipedia or the forum search here for the simple questions.
 
I thought GPUs already were multi-core? (quads etc.)

They're a lot like multicores, which is why I said the benefits aren't that great for multi-core GPUs.

Each quad is mostly independent but still answers to the same master scheduler. The quads are all bound up within the same GPU core and share quite a bit of common infrastructure. A wholly multi-core solution would be something like taking two G80s and pasting them onto the same die.
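A purely conceptual sketch of that arrangement, with an assumed screen size and a stand-in shader (real hardware keeps many quads in flight and uses the quad neighbours for screen-space derivatives):

```python
# Conceptual sketch: a "master scheduler" handing out work in 2x2 pixel quads,
# with the four pixels of each quad running the same shader in lockstep.

WIDTH, HEIGHT = 8, 8

def shade(x, y):
    # Stand-in pixel shader: just a simple gradient.
    return (x + y) / (WIDTH + HEIGHT)

framebuffer = [[0.0] * WIDTH for _ in range(HEIGHT)]

for qy in range(0, HEIGHT, 2):        # the shared scheduler walks the screen...
    for qx in range(0, WIDTH, 2):     # ...one 2x2 quad at a time
        quad = [(qx, qy), (qx + 1, qy), (qx, qy + 1), (qx + 1, qy + 1)]
        # Each quad is mostly independent of the others, but its four pixels
        # execute the same shader together -- which is also what lets real
        # hardware take screen-space derivatives for texture filtering.
        for x, y in quad:
            framebuffer[y][x] = shade(x, y)
```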
 
Techno+: Why do you need our opinions? Are you writing up a research paper on the subject?

Perhaps I'm being rude, but I'd like to point out that someone in one of your other threads suggested that you read a book. This is quite good advice, because looking over all of the subjects you have posted thus far seems to indicate that you'd rather ask questions here than research answers yourself. Please, have the common courtesy to at least use Wikipedia or the forum search here for the simple questions.

What kind of book should I read? And for your info, I don't simply ask questions; I do some research on the web and then ask, believe it or not.
 
You are currently looking at one of the more exhaustive discussions of 3D on the internet, so you could start right here. Your questions about CPUs, GPUs, Cell, and so forth have been hashed out in excruciating detail by the populace of Beyond3D. Yes, it may be rather arduous at times plugging away through a thousand-post thread, but if you really want to get down to gritty details, it's already here. With a bit of time and effort you can come to your own conclusions rather than soliciting simple blanket answers on subjects that are really much more complex.
 

thanks for the help, and I do appreciate that you didn't use any harsh language.
 
You are currently looking at one of the more exhaustive discussions of 3D on the internet, so you could start right here. Your questions about CPUs, GPUs, Cell, and so forth have been hashed out in excruciating detail by the populace of Beyond3D. Yes, it may be rather arduous at times plugging away through a thousand-post thread, but if you really want to get down to gritty details, it's already here. With a bit of time and effort you can come to your own conclusions rather than soliciting simple blanket answers on subjects that are really much more complex.

To be fair, there's also a huge amount of misinformation out there as well. Even knowledgeable people can get things very wrong. After all, there are plenty of people in my graduate-level algorithms class who think Cell is the second coming of performance and can't seem to understand the implications of its architecture. I even know a few people with PhDs in EECS who have a gross misunderstanding of 3D graphics and as such think it's nothing more than moving around blocks of memory. They'll argue with you until they are blue in the face that a GPU is worthless and that a CPU can do the job better, and they are very educated individuals.

So Techno+, like flf said, these questions have already been gone over many times, but only because they are fairly basic. But everyone needs to start somewhere: do your research, and if you still have questions you will have more background to start from. :)
 
They're a lot like multicores, which is why I said the benefits aren't that great for multi-core GPUs.

Each quad is mostly independent but still answers to the same master scheduler. The quads are all bound up within the same GPU core and share quite a bit of common infrastructure. A wholly multi-core solution would be something like taking two G80s and pasting them onto the same die.

But you can remove quads, and the whole GPU will still function?
 
No, because they'll probably waste a lot of those flops on things the GPU does naturally. Unless the cores are given filtering, AA, and special-function hardware (becoming mini-GPUs in the process), they'll have to emulate those functions.

That means what takes one instruction for a GPU may take several to dozens to hundreds of instructions on a CPU, which would cut performance significantly.

The CPUs also need to worry about CPU tasks the GPU doesn't, so the CPUs can't spend all their resources on graphics like the GPU can.


But would you say it's safe to say that such a CPU would almost certainly replace IGPs?
 
But you can remove quads, and the whole GPU will still function?

No more than a CPU could function if you tore out all the ALUs.

But would you say it's safe to say that such a CPU would almost certainly replace IGPs?
I don't see why an 80-core CPU would be used as an IGP replacement.

It could do rendering, but it would probably be targeted mostly at markets that could live without IGPs anyway.
 

With Vista there really aren't going to be any markets that don't "need" an IGP. Which is where I was going with it replacing the IGP: it's necessary for basic functionality but is otherwise unneeded.
 

The desktop market isn't the only market out there.

If they're using something as non-standard as 80 mini-processors, they're probably not going to care about 3D desktop performance. If they don't default to a 2D desktop, they'll keep whatever separate GPU or IGP they had, so that they don't waste their cores on something like Aero Glass.

edit:
That's assuming they use Windows, something a lot of markets just don't do, and will do even less if Vista ever tries to require 3D functionality.
 

I think this is a lot along the lines of the discussion we had regarding why CPUs are built the way they are despite the fact that most people don't need that type of performance. At which point, don't you think the 80 mini-core CPU will still be found on almost everyone's desktop, regardless of whether they need one or not?

Also, what markets are you talking about where someone is going to need this type of performance and actually be sitting at the machine, versus having it as a remote machine? That's a genuine question; I'm not actually aware of any. I immediately think of render farms and supercomputers, neither of which normally has a GUI in the sense most people would think of.
 
I think this is a lot along the lines of the discussion we had regarding why CPUs are built the way they are despite the fact that most people don't need that type of performance. At which point, don't you think the 80 mini-core CPU will still be found on almost everyone's desktop, regardless of whether they need one or not?

I doubt the manufacturers are going to target an 80 mini-core CPU at the desktop.
The performance profile on today's code would be horrible; today's code will be tomorrow's legacy software, and today's legacy software will still be tomorrow's legacy software.

Future desktop applications aren't going to thread out 80 ways, and past about 4 CPUs the benefits of extra cores drop off fast on most of the workloads the CPU must target.
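A minimal Amdahl's-law sketch of that drop-off, assuming (purely for illustration) a desktop workload where 75% of the work can run in parallel:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload (assumed 0.75 here), n = core count.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.75  # assumed parallel fraction for a typical desktop workload
for n in (1, 2, 4, 8, 16, 80):
    print(f"{n:3d} cores -> {amdahl_speedup(p, n):.2f}x")
# 2 cores ~1.6x, 4 cores ~2.3x, 8 cores ~2.9x ... 80 cores only ~3.9x:
# past a handful of cores the extra ones add very little for this kind of code.
```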

An 80 mini-core desktop CPU would basically trash performance on tasks CPUs are traditionally good at, just to do a lousy job at catching up to the GPU or a passable job at replacing the IGP (so it can do badly at everything else).

Also, what markets are you talking about where someone is going to need this type of performance and actually be sitting at the machine, versus having it as a remote machine? That's a genuine question; I'm not actually aware of any. I immediately think of render farms and supercomputers, neither of which normally has a GUI in the sense most people would think of.

Not many, which is why it wouldn't make much sense to target an 80-core CPU as an IGP replacement. The two chips would exist in mostly separate realms.

Perhaps some engineering workstations could benefit from very parallel processors, which would mean they'd rather save as many CPU cores as possible for the tasks they are running than have them worry about graphics.
 