or am I smoking crack here? is this
1) possible
2) likely to gain performance
How would data get from the app to the GPU? The GPU would have to be self-programming for a lot of things we do in the driver; that would add a lot of complexity. Also, how would you fix bugs in the HW/driver/app/API?
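The driver work mentioned here, translating API calls into command packets and validating them before the GPU ever sees them, can be sketched roughly like this. This is a toy model, not any real driver's design; the packet format, `Driver`, and `FakeGpu` are all made up for illustration:

```python
import struct

# Hypothetical opcodes for a made-up command-packet format.
DRAW = 1
SET_STATE = 2

class Driver:
    """Builds a command buffer: validates API calls and encodes them
    into the fixed-size packets our pretend GPU understands."""
    def __init__(self):
        self.cmdbuf = bytearray()

    def set_state(self, reg, value):
        if reg < 0:
            raise ValueError("invalid register")  # validation lives CPU-side
        self.cmdbuf += struct.pack("<III", SET_STATE, reg, value)

    def draw(self, vertex_count):
        self.cmdbuf += struct.pack("<III", DRAW, vertex_count, 0)

class FakeGpu:
    """Consumes the buffer the way a command processor would."""
    def __init__(self):
        self.regs = {}
        self.drawn = 0

    def execute(self, cmdbuf):
        for off in range(0, len(cmdbuf), 12):
            op, a, b = struct.unpack_from("<III", cmdbuf, off)
            if op == SET_STATE:
                self.regs[a] = b
            elif op == DRAW:
                self.drawn += a

drv = Driver()
drv.set_state(7, 0xFF)
drv.draw(3)
gpu = FakeGpu()
gpu.execute(drv.cmdbuf)
```

A "self-programming" GPU would have to absorb everything `Driver` does here, including the error handling, which is the complexity concern.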
A bigger question is what will happen in the future with multi-core CPUs? Quad-core is quickly becoming mainstream and Nehalem even runs eight threads simultaneously. Even the most intensive game is going to leave some CPU time unused.
Well, I think a more interesting question is: when will GPUs be used to design future GPUs? Maybe they already are, but at the very least it's coming.
http://www.hpcwire.com/hpc/2280791.html
Given that Larrabee is a Von Neumann architecture, wouldn't that allow running pretty much the whole driver on the GPU, leaving only some simple interfacing for the CPU side?
It seems to me that Larrabee could be upgraded to DirectX 11 and beyond just by installing a new driver/firmware, whereas current 'classic' Harvard architecture GPUs can't even make the jump from DirectX 10 to DirectX 10.1.
What are the chances that NVIDIA and ATI would transition from a 'Harvard GPU' to a 'Neumann GPU' in the not too distant future? Would they be willing to potentially sacrifice performance for ultimate flexibility? In my personal view they could actually gain performance thanks to the added flexibility...
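The "upgrade by driver/firmware" idea in the question above amounts to this: if the pipeline stages are programs running on general-purpose cores, then a new API level is new code, not new silicon. A toy sketch of that, with entirely made-up stage names and a made-up `ProgrammablePipeline` class:

```python
class ProgrammablePipeline:
    """Toy model: each pipeline stage is just a software routine
    that a 'driver update' can install or replace."""
    def __init__(self):
        self.stages = {}  # stage name -> software implementation

    def install(self, name, fn):  # the "driver/firmware update"
        self.stages[name] = fn

    def run(self, name, data):
        if name not in self.stages:
            raise NotImplementedError(f"no implementation for {name}")
        return self.stages[name](data)

pipe = ProgrammablePipeline()
# Illustrative stand-ins, not real DX operations:
pipe.install("dx10_blend", lambda src: [min(v, 255) for v in src])
# Later, an update adds a feature the original "hardware" never shipped with:
pipe.install("dx11_tessellate", lambda pts: pts + pts[:1])
```

On a fixed-function design, by contrast, `run("dx11_tessellate", ...)` would simply have no hardware to dispatch to.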
It might be possible to just send commands and all the related data straight to Larrabee, yes, but:
In addition, I fail to see why a Harvard or Von Neumann architecture is somehow related to DX upgradability.
That's irrelevant to whether an architecture is Von Neumann or Harvard. The only real distinction between the two is that the Harvard architecture makes a physical separation between instruction and data memory. Both can be made fully programmable, so a Harvard architecture design could just as easily be upgraded.

I take his comment to mean something like this: being able to rasterize DX9-level graphics entirely in "software" is possible, even though the CPU core isn't graphics-related at all. Sure, it's not fast, but you can "program" the DX level of that interface with new DLLs. Thus, the move to DX10 or DX11 would be a similar affair; performance would likely not be as good (depending on the situation) but the feature set would still exist, simply by virtue of how "programmable" the underlying hardware is.
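The "graphics entirely in software" point can be made concrete with a minimal rasterizer: point-in-triangle coverage via edge functions, which is the same test hardware rasterizers implement, just written as ordinary code on a general-purpose core. The 8x8 grid and function names are arbitrary choices for the sketch:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of the triangle (a, b, p); >= 0 means p lies on the
    # inside (left) of the directed edge a -> b for a CCW triangle.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the pixels whose centers fall inside the CCW triangle."""
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at pixel centers
            if (edge(ax, ay, bx, by, px, py) >= 0 and
                edge(bx, by, cx, cy, px, py) >= 0 and
                edge(cx, cy, ax, ay, px, py) >= 0):
                covered.append((x, y))
    return covered

pixels = rasterize([(0, 0), (8, 0), (0, 8)], 8, 8)
```

Nothing here is graphics hardware; a driver update could swap in a different `rasterize` (new sample positions, new fill rules) on the same core, which is the upgradability being argued about.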