DX10 GPUs: Will we see more GPGPU features?

What solution will succeed on the PC for the problem of physics, animation, etc.?



Acert93

Artist formerly known as Acert93
Legend
It is becoming more and more apparent that DX10 and Xenos (aka R500/C1, ATI's Xbox 360 GPU) have significant overlap. This has had me thinking: what role will MEMEXPORT play in DX10?

As a backdrop, it is pretty clear that PCs are "weak" in floating-point performance. PC gamers are increasingly looking forward to advanced animation, completely destructible environments, and interactive worlds orders of magnitude more intricate than what we are seeing now.

So far it does not appear Intel/AMD are too interested in solving this problem, at least not in the immediate future. Gamers are only a segment of the market, and devoting precious silicon real estate to features that mainly benefit gamers does not seem to be high on their list of priorities. If Intel's comments about integrated graphics and general attitude about the current state of performance (features > performance) are any indication, we won't be seeing a solution from Intel/AMD in the next couple of years. Not to mention any changes they introduce are 1. slow to enter the market and 2. don't scale quickly enough on a regular basis.

On the other hand, we will be seeing Physics Processing Units in a couple of months (Asus and BFG are already on tap to deliver these commercially). Yet it is hard to believe that a device that will be beneficial in a limited number of games, cost well over $200, and has limited usefulness (it basically works with a proprietary API) will gain significant market share. Discrete sound cards are pretty much dead, so it is hard to see a PPU making serious headway into the market.

:mad:

So it seems like neither of the above is a very appealing avenue for a solution to the problem. Yet one of the GREAT things about the PC is that it adapts--and usually quite well.

So I turn to the GPU. The gamers who need high-end graphics are the same ones who need high-end floating-point performance. This opens the door for NV and ATI to expand their products' usefulness AND creep into the "general processing" market to a degree.

If we are to believe that Xenos--with its flexible shader array and dynamic scheduling for efficient ALU use--can use MEMEXPORT to do more "mundane" general processing tasks, like physics, could we not see

1. DX10 embrace this model, and
2. ATI/NV put 2 GPUs on graphics boards?

The tradeoff of 2 GPUs is amazing. For games with no physics, both GPU cores work on shading, a la SLI/Crossfire. For games with physics, one GPU core can do graphics and the other physics (or some arbitrary load balancing).

Mid-range cards could include 1 GPU and 1 smaller GPU, and low-end cards could be 1-GPU solutions where a single GPU would balance physics and graphics (or just graphics if the chip was not fast enough).

The market overlap is excellent AND the net result is efficient--no silicon is idle during gaming. Unlike a PPU, which can pretty much ONLY be used for physics, a GPGPU could be used for graphics or a balancing act of physics and graphics.
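To make this concrete, here is a rough sketch of the kind of data-parallel kernel I have in mind: one thread per particle doing a simple gravity integration, written in CUDA-style C purely for illustration (the struct, kernel, and launch parameters are all made up).

Code:
// Sketch only: a simple Euler integration step, one thread per particle.
// This is exactly the kind of embarrassingly parallel FP work a "spare"
// GPU, or spare shader capacity, could absorb alongside rendering.
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

__global__ void integrateParticles(Particle* p, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    p[i].vel.y += -9.8f * dt;          // gravity
    p[i].pos.x += p[i].vel.x * dt;     // integrate position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Host side: launch enough blocks to cover every particle.
void stepPhysics(Particle* devParticles, int count, float dt)
{
    int threads = 256;
    int blocks  = (count + threads - 1) / threads;
    integrateParticles<<<blocks, threads>>>(devParticles, count, dt);
}

A driver-level load balancer could hand a batch like this to whichever GPU (or fraction of a GPU) happens to be idle that frame.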

This also answers the need for more floating-point performance. It is becoming apparent that for graphics to advance, animation and physics-based interactivity are needed to match the fidelity of the static rendering. Having a pretty picture that is standing still is just not good enough anymore. Graphics, imo, really needs to broaden into a wider market.

So, what do you believe is the solution for the PC performance issue? Will GPGPUs and DX10 be the solution, or will something else appear to resolve this issue?
 
I'm sure this will happen soon enough, but not until Havok or Novodex start coding for it. Not too many games do physics in-house, so it will take one of the physics middleware companies to get this moving.

Havok already has good market share and will look pretty bad next to hardware-accelerated physics from its competitor. So it may be in their best interest to give ATI or Nvidia a call. Either way, I think physics will definitely move to hardware in the near future. CPUs are at a dead end; we just don't know where the turn-off is....
 
I think that in the short term GPGPU will take care of it, but later both CPUs and GPUs will become very similar (as Cell has introduced to us), differing only in performance gains in a few key aspects (e.g. rasterizing on the GPU).
 
Somehow, I don't really see Cell being too similar to GPUs.
But I do see Cell as more suited to physics and other generic FP-intensive stuff than GPUs are.

If the architecture proves to be successful, we are likely to see it enter the PC world.

All IMHO of course
 
GPGPU could have a real future, but don't forget that Intel and AMD have suddenly started making dual-core processors for nearly the same price as a single-core processor.

The problem of course is that you will have to write multithreaded code. But developers will have to start doing that anyway, because lots of people will buy dual-core CPUs in the next year.
And it doesn't really matter much whether you have to create special code for a GPGPU, special code for a PPU, or spend extra time on multithreaded code.

I'm not so sure that a dedicated PPU such as the one Asus is making now will be more powerful than a single high-end P4 or Athlon 64 core for handling physics....


And a PC with a dual-core CPU would also be much easier to use for other tasks (MPEG encoding, for example), so I would rather buy another CPU than a dedicated PPU.

And once you have cleared the hurdle of using that second CPU/core, it is a small step to go further and add another dual-core CPU on the mobo.
Such a machine has LOTS of CPU power. Wouldn't developers rather use that huge amount of flexible CPU power instead of being stuck with yet another dedicated piece of hardware?
 
This is a very provocative train of thought. One way I could see it coming to fruition would be for one of the big two graphics companies to buy a physics middleware/hardware company, or for Microsoft to buy one and push it into middleware.
Without a clear vision of where the industry is going, I don't see any of the individual players being able to make this sort of sea change happen, and I don't think Microsoft has the background to develop this fast enough - they'd have to acquire someone to make it happen.
With only nVidia or ATI trying this, they would risk being alienated from the industry with capable hardware but limited software support.
Intriguing thought though.
 
X1800 Series slides at TechPowerUp.com

"New Performance Architecture... Ideal for Physics and Data Parallel Processing"

It definitely appears that ATI has its eyes set on its products being used for more than just graphics, moving into the "physics acceleration" business as well. Seems ATI has hedged a few bets with Xenos and X1800 in regards to features and possible future applications. I don't know if much will come of this, but it is interesting and exciting!

:cool:
 
Acert93 said:
X1800 Series slides at TechPowerUp.com

"New Performance Architecture... Ideal for Physics and Data Parallel Processing"

It definitely appears that ATI has its eyes set on its products being used for more than just graphics, moving into the "physics acceleration" business as well. Seems ATI has hedged a few bets with Xenos and X1800 in regards to features and possible future applications. I don't know if much will come of this, but it is interesting and exciting!

:cool:

The cool thing is they have it built in from low end to high end. Physics on the GPU is, I think, the next big thing.
 
http://www.tomshardware.com/hardnews/20051005_090950.html
Tomshardware.com said:
The real innovation however is just an idea at this time, but promises to have a major impact on the industry, if brought to reality. Heye mentioned that ATI plans to open the hardware architecture of the X1000 to allow third party developers to write non-graphics-related applications to run on the graphics processor. The company calls this feature "dynamic load balancing."

Compared to a Pentium 4 CPU, which delivers a floating point performance of 12 GFLOPs and a bandwidth of just under 6 GByte per second, a graphics processor is a calculation monster: According to ATI, an X1800 XT chip reaches 83 GFlops and 42 GByte per second. The full performance of a graphics processor may not always be needed - especially in dual-graphics environments - and users will be able to relocate processing power to other applications. According to ATI, these applications could include scientific applications such as fluid dynamics, but also entertainment-related functions such as physics or 3D audio processing. Similar features have been demonstrated by academic projects in the past on ATI and Nvidia platforms, but dynamic load balancing as described by ATI officials promises a whole new use of graphics processors.

The company expects GPU specific third-party API's to become common within a few years - with one of the most promising being physics processing: ATI believes that graphics chips provide enough power to cover the features that are currently promoted by Ageia. If ATI's vision comes true, Ageia's idea for physics board for every gaming PC may
This is what it must feel like to be Jawed. :cool:

Until today I would never have considered SLI or a dual-GPU card. As this technology develops and some developers begin using it, I am much more likely to consider such options. A PPU is not of much interest to me ($250 for a product with very limited use... kind of like my Audigy 2 ZS). But an SLI/Crossfire setup (or even dual GPUs on a single card) can be used in all games: if a game supports GPU physics, split the work; if not, dedicate both to graphics.

Even better: theoretically you could pick up an older GPU for cheap (or a low-end model like a $79 X1300) and use it for such tasks, or keep the old one around for that purpose when you upgrade. And of course a top-end GPU would be able to do graphics and some other tasks at the same time (even if you have to turn down some details).
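Just to illustrate what I mean by pressing a cheap second card into service: a hypothetical sketch (CUDA-style again, every name invented) where the physics-style work is pushed onto a secondary device while the primary one keeps rendering.

Code:
// Hypothetical sketch: run a physics-style workload on a secondary GPU
// (device 1) while device 0 stays dedicated to rendering.
#include <cuda_runtime.h>

__global__ void applyGravity(float* velY, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        velY[i] += -9.8f * dt;                    // one particle per thread
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    cudaSetDevice(deviceCount > 1 ? 1 : 0);       // use the spare GPU if present

    const int count = 1 << 20;                    // a million particles
    float* velY = 0;
    cudaMalloc((void**)&velY, count * sizeof(float)); // lives on the selected device
    cudaMemset(velY, 0, count * sizeof(float));

    applyGravity<<<(count + 255) / 256, 256>>>(velY, count, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(velY);
    return 0;
}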

I am digging this strategy a lot!
 
Yep. It seems to me Ageia has about 1 year to get credibility and mindshare in place before GPUs 2-3x faster than R520 are with us.

Jawed
 
Jawed said:
Yep. It seems to me Ageia has about 1 year to get credibility and mindshare in place before GPUs 2-3x faster than R520 are with us.
Wild hair... if the R580 has 3x as many ALUs (48 fragment shaders versus 16 in R520), this could be a big performance boost for stuff like this. Currently it does not look like there are a ton of shader-limited games, and even in games that are, increasing shader performance 3-fold seems kind of lopsided compared to the rest of the architecture.

I could be wrong, but with all those fragment shaders sitting around they would be PERFECT to pick up some extra parallel processing tasks.

Am I nuts or is that viable?
 
I think PPU/GPU-physics games are so far off that we're just gonna have to see what happens.

Watch the demos:

http://ati.com/designpartners/media/edudemos/RadeonX1k.html

to see if you think we can get those levels of graphics in any near-future games. I think devs are gonna lap up the huge amounts of shader power that are coming - with XB360 naturally being the trail-blazer, short term, as it's the closed system and has the most shader power (particularly when you account for dynamic branching).

Jawed
 