Why doesn't the mobo merge with the gfx board?

Hi guys,

The title of this thread might be hard to understand, so I'll explain. As you all know, most companies that make ATI and NVIDIA chipset motherboards also make their graphics boards. So what I thought is: why don't they simply merge both boards and save on manufacturing cost at the same time? For example, instead of manufacturing a graphics board with an R580 chip and 512 MB of GDDR3 memory and an RD580 chipset motherboard separately, they could put the components of both boards together, so we'd have an RD580 chipset motherboard with 512 MB of GDDR3 and an R580 GPU on it, while of course keeping the GPU and GDDR3 cooler in the retail box so the user can install it manually. This motherboard would undoubtedly be targeted only at gamers, and the main point of the approach is to shorten the GPU-to-CPU path and improve the bandwidth between them. So do you think this is a good method? I want your opinions.
 
I think a few boards have tried this.

I think there was a SiS motherboard that had the graphics chip integrated, but I don't think it was a high-end part.

The trouble is that motherboards are already very crowded, and you can't switch out the video card very easily.

Motherboard makers also tend to be very cheap.
 
Haven't you heard of integrated graphics? But the real issue is that such mainboards would be too hot and too crowded. Also, as a gamer I won't buy such boards, because I want the ability to upgrade. You really have some funny ideas, Techno+, first all this GPU-to-CPU stuff and now this... Why don't you read some books first? :-/
 
I think CPUs should be on a card with RAM slots, while the GPU should get a nice socket. The GPU is fast becoming the dominant number cruncher... Is that crazy logic?
 
GPUs innovate on their external pinouts too much. Make a socketed mainboard with a "256-bit" socket, and the 384-bit and 512-bit GPUs that follow won't fit. Moreover, there is more control over signalling, RAM layout, heat dissipation, and the rest if you have everything on a discrete board.
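
To put rough numbers on that (my own back-of-envelope figures, not anything from a spec sheet): a 256-bit memory interface means 256 data pins before you count a single address, command, clock, power, or ground pin, and clean signalling at GDDR speeds demands plenty of extra ground returns on top of that. Go to a 384-bit or 512-bit bus and you add 128 or 256 more data pins plus their returns, so a socket defined around 256-bit is obsolete after one generation.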

This only makes sense IMHO in mobile markets.
 
I want them to try making a high-end graphics card that can be dropped into a CPU socket, the way Torrenza is meant to let bare coprocessor chips drop in.

It'd skip PCI-Express, be cache-coherent, and allow for some local RAM one hop from the card's on-board memory.

Mostly I just want to see what kind of wacky mounting hardware it would have.
 
There are a lot of problems with such a setup for anything higher-end than a regular IGP:

1. GPUs are upgraded more frequently, or at different times, than motherboards (especially by enthusiasts), so the pairing can't be fixed

2. Making an upgradeable GPU socket would be tough because specifications such as memory speed/type/timings, memory bus width, and power delivery keep changing, and all of them affect the pin count

3. Making upgradeable RAM would be tough because removable modules can't run at the same speeds as soldered-down memory (you can't solder your own BGA modules ;) ), and it's even harder to upgrade things like bus width

4. You really don't save much (if anything) by fusing a high-end graphics board onto a motherboard, especially as a percentage of total cost. For ultra-low-end graphics, going integrated saves a much larger percentage

5. (CNCAddict's suggestion) Number crunchers don't need a whole motherboard. CPUs need one because they control and connect to everything: hard drives, optical drives, USB, peripherals, etc.


So don't count on such a scheme ever happening. Integrated graphics will become more powerful (not relative to the high end, but relative to today), and that's about it.
 
Also, I don't think the high-speed "link" between CPU and GPU has really proven its value yet. We'll see if the RSX<->CELL link shows any benefit. Upload and readback have not been the bottleneck for games, and with GPUs taking on more and more GPGPU-specific processing (especially CUDA enabling pointer chasing and treating video memory as a heap), I don't think the CPU needs to do much except act as a router for streams: prepping them, setting them up, transforming them, and uploading them. I think streaming and batching between CPU and GPU will work fine for most workloads. Having the CPU and GPU chase pointers on each other's turf, sharing coherent memory and effectively doing SMP between them, seems expensive and risky, with no proven gains at this point.
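
To make that streaming/batching model concrete, here's a minimal CUDA sketch of my own (nothing from a shipping engine; the kernel is just a stand-in for real number crunching). The CPU preps a batch in pinned memory, queues an upload, a kernel launch, and a readback on one stream, and never shares coherent memory with the GPU:

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for the real GPU-side work on one batch.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Pinned host memory, so the copies below can be truly asynchronous.
    float *h_buf;
    cudaMallocHost((void **)&h_buf, bytes);
    for (int i = 0; i < n; ++i)
        h_buf[i] = 1.0f;               // the CPU "preps" the batch

    float *d_buf;
    cudaMalloc((void **)&d_buf, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Upload, crunch, and read back as one queued batch; the CPU is free
    // to prep the next batch while this one is in flight.
    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, 2.0f, n);
    cudaMemcpyAsync(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    printf("h_buf[0] = %f\n", h_buf[0]); // 2.0

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}

The point is that the host only routes and synchronizes; all the pointer chasing stays on one side of the PCI-Express link, with no coherent shared memory in sight.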

CPUs can be socketed because they don't depend on bus bandwidth and RAM as much as GPUs do. But as mint said, how the hell will you install a "DIMM"-like GDDR4 module and run it at 1 GHz, and connect it to GPUs whose bus widths and memory channels are changing all the time?
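
Rough numbers (again my own back-of-envelope, not from any datasheet) show the gap: a 256-bit bus of 1 GHz GDDR4 (2 Gbps effective per pin) gives 256 × 2 / 8 = 64 GB/s, while a single DDR2-800 DIMM on its standard 64-bit interface manages 64 × 0.8 / 8 = 6.4 GB/s. A socketed "graphics DIMM" would need roughly ten times the pin and trace budget of a commodity module just to match a current card, and that target moves every product cycle.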

GPU vendors won't like having their designs hamstrung by "standardized" sockets. It's bad enough that DX10 is pushing them towards commoditized feature sets, leaving them to compete mostly on price and speed. Standardizing the motherboard interface would tie their hands on implementation as well.
 
Man, I read this title and internally I went AIEEEEIIIEEE!! NO MORE PLATFORMIZATION YOU $%#$#!!!!

This was probably just me, however. ;) And I'm probably going to get rolled on the topic in general. Or rather we all are, as it seems to be picking up momentum rather than losing it.
 
What exactly is meant by "merged with the motherboard"? As in having its own socket, much like the CPU? Possibly, but I have my doubts about that happening anytime soon. As in being part of the motherboard itself? That's already happened; we've had integrated graphics for ages now. We will not see high-end parts become part of the motherboard, due to a couple of major issues. The first is that it would simply be amazingly complex, pointlessly so. The second is that the type of person laying down the money for that kind of performance also wants the ability to upgrade each component individually, which this would prevent.
 
Techno+:

A little forethought can easily pinpoint the problems with the type of integration you are considering, but I get the impression you like to have people answer relatively obvious questions. Here are a few more questions to add to the mix.

  1. I swapped out the graphics board in one system three times in two years. How is this accomplished with an integrated system?
  2. SLI and XFIRE allow an enthusiast to add an additional GPU (or perhaps several) to their workstation. How would this be accommodated with an integrated system?
  3. New GPUs often arrive alongside new memory types on the PCB. How could you accommodate new memory types with an integrated system?
  4. What makes you think it would be all that much cheaper to jam all those components onto one PCB in the limited space available?
  5. Why would a gamer ever buy a motherboard that he can't mod, replace, or incrementally upgrade independently of the graphics board?
  6. How does your suggestion, in any conceivable way, reduce the actual traffic between the CPU and the GPU?
 
Haven't you heard of integrated graphics? But the real issue is that such mainboards would be too hot and too crowded. Also, as a gamer I won't buy such boards, because I want the ability to upgrade. You really have some funny ideas, Techno+, first all this GPU-to-CPU stuff and now this... Why don't you read some books first? :-/

Even my friends say that I have crazy ideas, and I guess without crazy ideas I wouldn't be Techno+ lol.
 
My friends say the same about me, but you have to distinguish between crazy as in "freaky" and crazy as in "stupid" ;-)
 
As has been stated, GPUs integrated into northbridges are already the dominant form of graphics. AMD looks set to actually integrate the GPU onto the CPU package, and if worthwhile, Intel will of course do the same. It remains to be seen if the advantages are compelling though.
http://www.dailytech.com/article.aspx?newsid=4982

Personally, I have always been in favour of a slightly different take, where you still have high-speed graphics memory but let it double as CPU cache/local storage. That scenario requires a higher degree of platformization than is the norm on desktops today. OTOH, this could change; it's not as if the strong parties are averse to the idea of selling a larger part of the whole widget.
 
"Platformization" is a dated concept (anyone remember the "multi-function device" craze of the '80's/early '90's--sound/fax cards, anyone?) which has far more to do with economics than it has to do with advancing the state of the art. I think that continuing separate but parallel and complementary design and development paths for cpus and gpus will be the norm for the foreseeable future. Economically, some of the older ideas still have merit, but I think that these concepts are often construed as technological developments instead of economical traditions, which I think is unfortunate. But, that's marketing for you.

While some notable people seem overly fond of predicting what will be state of the art a decade, sometimes even two decades, from now, by which point the CPU and GPU will supposedly have merged, I am far more cautious. I cringe at predicting two years out, let alone a decade or two...;) I mean, it's always interesting to think back to 1996, when the current predictions all had AMD going straight out of business, and no one was predicting that just three years later AMD would launch the K7 and the entire face of the international CPU landscape would change drastically as a result. (I will never forget John Carmack commenting that the K7 was just an x86 "architecture hack", quote unquote, that would be easily eradicated from the landscape by the superior MHz ramp Intel was surely preparing to unleash.)

I think that when various notables make their "predictions" about what's coming, it should never be forgotten that these people are speaking from the conceptual constraints of their particular market disciplines. Carmack and the guys at Epic, et al., routinely speak from their own narrow fields of interest. What Carmack was actually saying when the K7 first began to trounce the Pentium in his own code, after he grudgingly began optimizing for it, was "I'm hoping Intel will eradicate the K7 through a MHz ramp, because, gee, I have enough work optimizing for one architecture and I sure don't like the idea of having to optimize for two!" Take, too, the recent announcement by Newell and Valve about the challenges and payoffs of multi-threaded games, when just a couple of years ago Newell was verbose in listing his reasons why single-threaded games were here to stay.

So what's really interesting about these kinds of predictions is not that they give us a worthwhile view of future hardware, but that they give us insight into how and why the people making them think as they do. Considering the stark differences between many of their opinions yesterday and today, across wide ranges of hardware topics, it seems pretty clear to me that these folks mainly concern themselves with following the state of hardware development, whatever it may be: a state dictated by persons other than themselves, and for reasons that don't necessarily have much to do with making life easier for game developers... ;)
 
Like what the others said...

That would really kill the whole upgrading thing...

Some, if not all, tech enthusiasts upgrade more often than they change their underwear :)

It would also kill off their XFire/SLI configs...

unless what you're suggesting is that the same board also has two 16x PCI-Express slots...

Such boards are available...

just look at any computer being sold with onboard VGA

but only at the low end... AFAIK


EDIT...

although I just remembered AMD is planning a CPU/GPU hybrid...
 