Chalnoth said:
You didn't know that we here at B3D control the entire industry?

SHHHHH! You're not supposed to tell anyone that!!!
epicstruggle said:
Just out of curiosity, would Intel be able to use multi-core CPUs to compete against GPUs? There was some info released that Intel plans to release a 32-core CPU before the end of the decade; could they be used in desktop PCs to take over some graphics work?

It depends on what you mean by "some graphics work". If you're looking at a high-end gaming system then no, the CPU is likely never going to completely take over any stage of the graphics pipeline. Gamers will pay almost whatever is necessary to guarantee the best visuals, and dedicated hardware will always be faster, even though the gap might be getting smaller.
Nick said:
Once you meet certain resolution and framerate requirements with software rendering, the benefit of buying a graphics card diminishes.

trinibwoy said:
This point is a bit blurry for me since much of 3D hardware processing nowadays is practically software rendering, just with a different ISA.

What I meant is software rendering on the CPU. It's certainly true that from a programming point of view GPUs are quickly getting almost as programmable as CPUs, but I don't think that changes the fact that CPUs are becoming more capable of running 3D applications with every generation. Of course, we're not at the point yet where people stop buying graphics cards and have their CPU connected directly to the monitor cable. But gradually the integrated graphics chips might get surpassed by software rendering, and their 3D functionality could shift to the drivers.
Nick said:
But gradually the integrated graphics chips might get surpassed by software rendering and their 3D functionality could shift to the drivers.

Chalnoth said:
I think it's more likely that when we do get integrated graphics on a CPU, we'll have a DAC on the motherboard that reads from system memory for display, rather than having the pinout for the VGA/DVI port straight on the CPU socket.

I can see that too... leading to the inevitable possibility of motherboard manufacturers using cheap, low-quality RAMDACs to save a few cents. Of course, I believe it's more likely that on those platforms the north bridge would get an integrated RAMDAC/VGA core used for display, while the GPU/vector core would only be used for 3D rendering/Vista. If it's done right, the system would still work without the vector processor, it just wouldn't have any 3D acceleration.
Nick said:
What I meant is software rendering on the CPU. It's certainly true that from a programming point of view GPUs are quickly getting almost as programmable as CPUs, but I don't think that changes the fact that CPUs are becoming more capable of running 3D applications with every generation. Of course, we're not at the point yet where people stop buying graphics cards and have their CPU connected directly to the monitor cable. But gradually the integrated graphics chips might get surpassed by software rendering, and their 3D functionality could shift to the drivers.

Right now, I think there's a major problem, and that's the limited GPU programming model (DX9/10). IMO, once GPUs are down to having just a 10x or so brute-force performance advantage over CPUs, the generality of the CPU will become a significant factor, enabling non-incremental improvements that won't be at all possible with the limitation I mentioned.
Killer-Kris said:
The only contention I have with that sentence is I would change "might" to "will". Other than that, it's about as correct and concise an answer as we'll get.

I don't believe it's an absolute certainty. If Intel decides to keep increasing the performance of its integrated graphics chips, then there's little room left for CPU-based rendering. If we look at the laptop market, performance/Watt is of primary importance, so it makes sense to keep having dedicated and efficient graphics hardware.
Colourless said:
...leading to the inevitable possibility of motherboard manufacturers using cheap, low-quality RAMDACs to save a few cents.

Who needs RAMDACs in 2010?
Bastion said:
There is also the matter of DirectX itself. We wouldn't want or need it for software rendering. You'd want to write your own software renderer, optimized and tweaked for the particulars of your game engine. By doing that, you'd be able to go way beyond the hardcoded, high-overhead DirectX feature set and the underlying SGI triangle rendering model.

Personally, I don't expect DirectX to become deprecated. Don't forget that the design of DirectX 11 must have already started, and Microsoft is in very close contact with all the big hardware manufacturers to balance features. There's little doubt that the trend toward more programmability is going to continue. By the time CPUs are capable of doing some serious 3D processing, GPUs could very well have full scatter possibilities and a programmable setup engine, rasterizer and interpolators.
Interestingly, this already happened this generation with sound on the Xbox 360 and the PS3. The only sound hardware in the system is the DAC, and all of the interesting sound mixing algorithms are done in software.

I don't think it's really comparable. Sound processing (for games) is fairly simple and should take only a couple of days. But writing your own renderer is complicated and math intensive. Avoiding numerical imprecision alone can be a nightmare, and implementing optimized anti-aliasing and anisotropic filtering isn't something you want to burden every game developer with.
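To give a sense of why software mixing is considered the easy part, here is a minimal sketch of a mixer inner loop; the Voice struct, buffer layout and 16-bit mono format are assumptions made purely for illustration, not taken from any console SDK:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical voice: a mono 16-bit sample buffer with a play cursor and volume.
struct Voice {
    std::vector<int16_t> samples;
    size_t cursor = 0;
    float volume = 1.0f;
};

// Mix all active voices into one output block, saturating to 16 bits.
// This accumulate-and-clamp loop is the core of a software mixer; effects such
// as reverb or 3D panning would be additional passes layered on top of it.
void MixBlock(std::vector<Voice>& voices, int16_t* out, size_t frames) {
    for (size_t i = 0; i < frames; ++i) {
        int32_t acc = 0;
        for (Voice& v : voices) {
            if (v.cursor < v.samples.size())
                acc += static_cast<int32_t>(v.samples[v.cursor++] * v.volume);
        }
        if (acc > 32767) acc = 32767;
        if (acc < -32768) acc = -32768;
        out[i] = static_cast<int16_t>(acc);
    }
}
```

Compare that with even a bare-bones triangle rasterizer, which already needs perspective-correct interpolation, clipping and sub-pixel precision before filtering quality even enters the picture.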
Uttar said:
Using CELL for graphics doesn't make sense unless you use REYES imo, and don't even get me started on that...

Please, do go and get started on that.
Alstrong said:
What makes Cell good for REYES?

It's a completely different rendering method that is run on many CPUs, for starters.
Chalnoth said:
I've always thought Sweeney was a bit off in his predictions for future PC hardware. As an example, this cost him significantly with the original Unreal engine, which was designed for software rendering. He had properly anticipated the advance of CPUs, but had completely underestimated the progression of GPUs.

Edit: Note that I do have great respect for the guy, and think he's especially excellent for creating UnrealScript, but I think Carmack has always been much better at visualizing the future of gaming hardware.
Fox5 said:
How did it cost him with Unreal, other than being a rather CPU-dependent engine? (which the more modern Doom 3 engine also seems to be for its time)

Unreal was originally built around a software-rendering architecture. He saw the advancement of CPUs quite well, but vastly underestimated the advancement of 3D graphics hardware. In the end, this meant that the first Unreal engine had a very hard time running well on any 3D hardware. It was many months after release before a decent Direct3D renderer was written, and longer still before a good OpenGL renderer was available.
Maybe he was referring to raytracing, in which case raytracing hardware may not be much faster than a general-purpose CPU? When will raytracing ever be a superior method, though?

I don't buy it. Raytracing still has the nice property that every ray is independent of every other ray, and it would also benefit from acceleration of some things that are currently accelerated in 3D graphics hardware, such as texture accesses. So it might not work well with current GPUs, but it may work well with future architectures.
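As a concrete illustration of the "every ray is independent" argument, here is a minimal sketch of how a raytraced frame splits across cores with essentially no synchronization; the Scene type and TraceRay stub are placeholders invented for the example, not any particular renderer's API:

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

struct Color { uint8_t r, g, b; };
struct Scene { /* geometry would live here */ };

// Stand-in for a real ray/scene intersection and shading routine. The point is
// only that it depends on nothing but the (read-only) scene and the pixel.
Color TraceRay(const Scene&, int x, int y) {
    return Color{ static_cast<uint8_t>(x & 255), static_cast<uint8_t>(y & 255), 128 };
}

// Each pixel's ray reads the shared scene but writes only its own pixel, so the
// image can be carved into interleaved rows and traced on separate threads,
// with the only synchronization being the final join.
void RenderFrame(const Scene& scene, std::vector<Color>& image, int w, int h) {
    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threads; ++t) {
        workers.emplace_back([&, t] {
            for (int y = static_cast<int>(t); y < h; y += static_cast<int>(threads))
                for (int x = 0; x < w; ++x)
                    image[static_cast<size_t>(y) * w + x] = TraceRay(scene, x, y);
        });
    }
    for (auto& worker : workers) worker.join();
}
```

What doesn't fall out for free is fast, coherent memory and texture access inside TraceRay, which is exactly the part current graphics hardware accelerates.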
Alstrong said:
What makes Cell good for REYES?

The fact it sucks at texturing, maybe? As such, there doesn't remain much of anything but REYES for it to be good at. And considering its focus on sheer ALU horsepower, it does seem quite appropriate - not that it'd compete favorably with the image quality of a modern GPU, mind you!
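For anyone unfamiliar with the term, REYES renders by splitting primitives until they are small on screen, dicing them into micropolygon grids, shading every grid vertex, and then sampling the result. Below is a toy sketch of that loop; the types, threshold and dummy shading are invented for illustration only, but it shows why the workload is mostly independent, ALU-heavy chunks that could in principle be farmed out to SPE-like cores:

```cpp
#include <vector>

// Toy skeleton of the classic REYES flow: split until small, dice into a
// micropolygon grid, shade every grid vertex, then sample. All types and
// numbers here are placeholders, not a real implementation.
struct Primitive { float screenSize; };        // stand-in for a patch or curve
struct Grid { std::vector<float> vertices; };  // stand-in for a micropolygon grid

void RenderReyes(std::vector<Primitive> work, std::vector<float>& framebuffer) {
    const float kDiceThreshold = 0.1f;          // "small enough on screen"
    while (!work.empty()) {
        Primitive p = work.back();
        work.pop_back();
        if (p.screenSize > kDiceThreshold) {    // split: cut into smaller pieces
            work.push_back({p.screenSize * 0.5f});
            work.push_back({p.screenSize * 0.5f});
            continue;
        }
        Grid g;                                  // dice: tessellate into ~pixel-sized quads
        g.vertices.assign(16 * 16, 0.0f);
        for (float& v : g.vertices)              // shade: run a shader per grid vertex;
            v += 1.0f;                           // each grid is independent and ALU-bound,
                                                 // which is the part that suits a Cell SPE
        framebuffer.push_back(g.vertices.front()); // sample/composite (grossly simplified)
    }
}
```

Texturing during the shade step is conspicuously absent from this sketch, which lines up with the "sucks at texturing" objection above.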
Fox5 said:
How did it cost him with Unreal, other than being a rather CPU-dependent engine?

Unreal Engine 1 uses techniques like portals and polygon sorting to minimize overdraw and avoid the use of a z-buffer. This was very beneficial for a software renderer, but for hardware rendering it's better to do some rough/fast visibility determination and rely on the z-buffer. By the time the engine was rewritten for the first generation of hardware, it wasn't ideal for the next generation. So the Unreal engines have always been known as rather CPU-limited.
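To make the contrast concrete: the hardware approach replaces all of that sorting with a per-pixel depth compare, so triangles can be submitted in any order. A minimal sketch, with made-up types:

```cpp
#include <cstdint>
#include <vector>

// One candidate pixel produced by rasterizing a triangle.
struct Fragment { int x, y; float depth; uint32_t color; };

// Hardware-style hidden-surface removal: no sorting, no portals, just a depth
// compare per pixel. Triangles can arrive in any order.
void WriteFragment(const Fragment& f, std::vector<uint32_t>& colorBuf,
                   std::vector<float>& zBuf, int width) {
    size_t i = static_cast<size_t>(f.y) * width + f.x;
    if (f.depth < zBuf[i]) {   // closer than what's already stored?
        zBuf[i] = f.depth;
        colorBuf[i] = f.color;
    }
}
```

The sorting and portal traversal Unreal did on the CPU avoided exactly this per-pixel compare and its memory traffic, which was a win in software but bought nothing once the hardware did the compare essentially for free.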
That "very very long time" is actuall pretty short: SwiftShader. At least for the software aspect...BTW, even if cpus do become powerful enough to rival gpus, there's still the software aspect. Unless microsoft rewrites Direct3d to run on cpus as well, it would be a very very long time before the market could move away from gpus.