ATI stakes claims on physics

Discussion in 'Beyond3D News' started by marco, Oct 12, 2005.

  1. marco

    Regular

    Joined:
    Jan 30, 2002
    Messages:
    303
    Likes Received:
    0
    Location:
    The netherlands
     The Tech Report posted a small article on ATI and using their GPUs for physics:

     "One of the more surprising aspects of ATI's Radeon X1000 series launch is something we didn't get a chance to talk about in our initial review of the graphics cards: ATI's eagerness to talk about using its GPUs for non-graphics applications.

     ATI practically kicked off its press event for the Radeon X1000 series with a physics demo running on a Radeon graphics card. Rich Heye, VP and GM of ATI's Desktop Business Unit, showed off a simulation of rolling ocean waves comparing physics performance on a CPU versus a GPU. The CPU-based version of the demo was slow and choppy, while the Radeon churned through the simulation well enough to make the waves flow, er, fluidly. The GPU, he proclaimed, is very good for physics work, and he threw out some impressive FLOPS numbers to accentuate the point. A Pentium 4 at 3GHz, he said, peaks out at 12 GFLOPS and has 5.96GB/s of memory bandwidth. By contrast, a Radeon X1800 XT can reach 83 GFLOPS and has 42GB/s of memory bandwidth."

     You can read the story here: http://techreport.com/onearticle.x/8887
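
     For the curious, here is a minimal sketch (Python/NumPy, purely illustrative, not ATI's demo code) of the kind of grid-based wave update such a demo typically runs. On the X1000 hardware this would live in a pixel shader over height-field textures; the point is that every cell updates independently, which is exactly the data-parallel shape GPUs are built for. The grid size, wave constant, and damping are all assumed values.

     [code]
     # Explicit finite-difference step of the 2D wave equation over a
     # height field; each cell depends only on its neighbours' previous
     # values, so all N*N updates can run in parallel.
     import numpy as np

     N = 512                       # grid resolution (assumed)
     C = 0.3                       # wave speed * dt / dx; keep < ~0.7 for stability
     h_prev = np.zeros((N, N))     # heights at t-1
     h_curr = np.zeros((N, N))     # heights at t
     h_curr[N // 2, N // 2] = 1.0  # a single splash in the middle

     def step(h_prev, h_curr):
         lap = (np.roll(h_curr, 1, 0) + np.roll(h_curr, -1, 0) +
                np.roll(h_curr, 1, 1) + np.roll(h_curr, -1, 1) - 4.0 * h_curr)
         h_next = 2.0 * h_curr - h_prev + (C * C) * lap
         return h_curr, h_next * 0.998   # slight damping so the waves settle

     for _ in range(100):          # run 100 simulation steps
         h_prev, h_curr = step(h_prev, h_curr)
     [/code]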
     
  2. Ali

    Ali
    Newcomer

    Joined:
    Dec 16, 2002
    Messages:
    103
    Likes Received:
    3
    I hate to say it, but it looks like we may need someone like MS to step in with an API for physics. With a splintered API landscape for developers to work towards, proper widespread use of PPUs will never get off the ground.

    I like the idea of Crossfire being useful for more than just graphics as well. It could actually almost make it a worthwhile purchase.

    Why spend an extra £200 on a PPU that's only going to be used in 1 or 2 titles, when you can spend £300 on another GPU that can be used all the time? If not for physics, then for FSAA or a speed boost.

    Ali
     
  3. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    It's been pointed out a couple of times that MS has already been hiring for a "DirectPhysics" position, so an MS API may already be in the works.
     
  4. Nick

    Veteran

    Joined:
    Jan 7, 2003
    Messages:
    1,881
    Likes Received:
    17
    Location:
    Montreal, Quebec
    Soon we'll have quad-core CPUs capable of nearly 50 GFLOPS. Wouldn't it be wiser to use these extra cores effectively instead of using half the GPU's processing power?

    Anyway, I think the biggest problem is not the GFLOPS, but a good API and advanced visibility algorithms (PVS/portals are static). Current games already use a fair amount of physics, but to deform the scene you'd have to recalculate visibility efficiently, which is a challenging, engine-specific problem.
     
  5. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Physics and other GPGPU processes: I'm all about them. But at the same time, can the modern video card really be viewed as that sort of an all-in-one solution? Even assuming the proper API is in place, with games like F.E.A.R. (set to launch in a week) already bringing the best of the best of current cards to their knees, I'm not sure that there will be much processing power left over for physics when rendering these games. And I think it stands to reason that there will be a strong correlation between increasingly high graphics and physics loads in upcoming games.

    I can understand the benefit in some scientific applications and elsewhere, but in their primary market - games - it seems to me that either an extra CPU core or indeed a dedicated physics card may be the way to go.

    Maybe it would provide an additional benefit/feature for SLI or CrossFire though.
     
  6. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    Does it even need to be specifically for an SLI/Crossfire solution, though? Someone in the TR thread had a very good point - people replace their video cards fairly often, to upgrade for the latest games, but instead of throwing away your old board, how about using that as just a separate GPGPU/physics processor?
     
  7. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    I saw that too, but to me the question is - how? I mean, I can't take an older card and put it in my board along with a newer one; there are simply no slots (unless we're talking about standard PCI-slot video cards, but...). What board has two PCIe x16 slots for purposes other than SLI or CrossFire? And I'm still on AGP too!

    Now granted, if board drivers are updated such that an SLI- or CrossFire-capable mobo could instead read one video card as a physics card and one as *the* video card, then I think we might be getting somewhere pretty interesting - and at the same time ATI and NVidia would give the consumer yet another reason to purchase these higher-end boards.
     
  8. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,729
    Likes Received:
    5,821
    Location:
    ಠ_ಠ

    Well first, it's a matter of getting motherboard manufacturers to include those longer PCIe slots. As to how many lanes it should use, I'm not sure. I suppose the worst-case scenario is 32 lanes for SLI/Crossfire, another 8 for this GPGPU, and hopefully there is still space left for a decent sound card... :wink: Such a computer could verily heat up my house in winter!

    Any problems with driver conflicts though? Maybe have dedicated GPGPU/physics drivers?
     
  9. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    These things are not going to happen overnight; we are still many years away from this becoming mainstream, but there are certainly plenty of ideas that can be played with. As for the physicalities of the boards, the beauty of PCI Express is that it is easily configurable, and devices can use the number of lanes that are actually connected. There has been at least one motherboard with an open-ended 1x connector, so a 4x/8x/16x device can be physically inserted but only one lane is used; this could become more frequent with PCIe (and, IIRC, PhysX uses one lane, so I don't see why a graphics board only calculating physics would require more than a PhysX board).

    A lot will need to be sorted out, but I'd say there are still plenty of directions PCs could take in the coming years (and I wonder how much Intel and AMD are taking note of these types of noises).
     
  10. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,729
    Likes Received:
    5,821
    Location:
    ಠ_ಠ
    ah... good to know, thanks :)
     
  11. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    70
    Indeed.

    It would not surprise me to see AMD and/or Intel come up with the next-gen "math co-processor" like we had back in the pre-i486 days. That is, have a separate socket on the board that houses an optional floating-point math chip with its own access to system memory.

    We know that Intel would rather sell you those CPU cycles than have Ati / nVidia / Ageia do it.
     
  12. Carl B

    Carl B Friends call me xbd
    Moderator Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Well, if Intel's long-term roadmaps are any indication, they may already be planning to do so, only in the context of non-homogeneous multi-core chips. Intel and AMD may not be able to move fast enough to head off the GPU boys as far as physics goes, though; the Intel 'specialized' core concept seemed set to debut in the 2010+ timeframe.
     
  13. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    70
    Right, but if Intel feels that Ageia et al. may be making significant inroads "too soon", we may see Intel go the dual-chip route initially just to get in the game. Then, integration with the x86 CPU would follow.

    Then again, as long as there is an "industry standard" way to expose PPU functionality (DirectPhysics for the Windows platform, and maybe a different open standard for non-Windows platforms), Intel may be content to let others in the door...for a little while. ;)
     
  14. blakjedi

    Veteran

    Joined:
    Nov 20, 2004
    Messages:
    2,975
    Likes Received:
    79
    Location:
    20001
    Would that be similar to the VMX on PPC?
     
  15. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    Just wanted to note that Dave had the scoop on the wave demo a week ago:

    http://www.beyond3d.com/reviews/ati/r520/index.php?p=08

    Xbitlabs.com had a similar article last week as well.

    It is interesting they mention Xenos, because Xenos seems to have some extra logic (MEMEXPORT) that brings physics-type processing one step closer.

     
  16. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    Evidently the wave demo uses Render to Vertex Buffer as well!
     
  17. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    Just my 2 cents, but:

    1. GPUs are much better parallel processors than current x86 implementations.

    2. GPUs, due to their parallel nature, are ramping up performance at a rate PC CPUs have been unable to match.

    3. GPUs have a bit more overhead compared to CPUs on these tasks. E.g. R520 has a theoretical peak for programmable shaders in the 180 GFLOPS range I believe, of which ~85 appear to be realistically available for physics. Using today's numbers, 12 GFLOPS vs. 83 GFLOPS, what would be best? Using 12 GFLOPS on the CPU (and thus having nothing left for AI or game code!), or taking a ~15% hit in graphics, dedicating those 12 GFLOPS to physics, and STILL having the CPU open for game tasks? *Obviously* it is not as easy as that (see the rough numbers sketched below), but I think the point is valid: GPUs have a much higher cap on performance usable for physics.
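
     A rough sketch of that arithmetic (Python, using only the numbers quoted in this thread; treating the whole P4 budget as the physics workload is my assumption):

     [code]
     # Back-of-envelope behind the "~15% hit" above.
     cpu_peak = 12.0     # P4 3GHz peak, GFLOPS (TechReport figures)
     gpu_usable = 83.0   # X1800 XT GFLOPS realistically usable here

     physics_budget = cpu_peak             # give physics all the CPU could offer
     gpu_hit = physics_budget / gpu_usable
     print(f"graphics hit if the GPU absorbs it: {gpu_hit:.0%}")  # ~14%
     # The CPU option costs 100% of the CPU; the GPU option costs ~15%
     # of the GPU and leaves the CPU free for AI and game code.
     [/code]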

    As for quad-core CPUs, what do you mean by "soon"? 2007/2008? I have not seen them on the roadmap for 2006.

    Considering Xenos has 48 shader ALUs and R580 appears to have 48 pixel fragment shaders, it is hard not to imagine flagship GPUs having, in late 2006 or early 2007, in the range of 96 ALUs (or more; I would lean toward more based on how well the 65nm and 45nm processes appear to be going, the fact that ALUs are fairly small, and that the unified shader design seems to be a better way of pooling resources). By the time quad-core CPUs come into the mainstream (2010?) we may very well see mainstream/low-end GPUs with as many shaders as Xenos.

    And if ATI is serious about the GPU being a CPU vector co-processor (which a USA could be very good at), we may see more designs with multiple GPUs on one board, SLI-type setups, and even an emphasis on "Upgrade your GPU, but keep the old one in your PC and dedicate it to physics!". I have a 6800GT. I am planning right now to get the refresh to R600/G80 if everything turns out well. I would be VERY happy if I could leave my 6800GT in my case and dedicate it to physics! :cool:

    The API is an issue. That is why MS is working on it. It may take a couple of API revisions before it becomes really usable, BUT it looks like inroads are being made.

    My guess is that an API would be aimed to take advantage of GPUs, CPUs, PPUs, etc., and let consumers choose which hardware they want to accelerate the task; something like the sketch below.
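
     To make that concrete, here is a hypothetical sketch (Python; every name is made up for illustration, nothing here is from a real DirectPhysics spec) of what such a hardware-agnostic API could look like from the application side:

     [code]
     # One backend per accelerator (CPU core, GPU, PPU); the game codes
     # against the interface and the user picks the hardware.
     from abc import ABC, abstractmethod

     class PhysicsBackend(ABC):
         @abstractmethod
         def step(self, bodies: list, dt: float) -> None: ...

     class CpuBackend(PhysicsBackend):
         def step(self, bodies, dt):
             # plain scalar loop; a GPU or PPU backend would batch this
             # same work onto its own hardware instead
             for b in bodies:
                 b["vel"] = [v + a * dt for v, a in zip(b["vel"], b["acc"])]
                 b["pos"] = [p + v * dt for p, v in zip(b["pos"], b["vel"])]

     def pick_backend(preference: str) -> PhysicsBackend:
         backends = {"cpu": CpuBackend()}   # "gpu"/"ppu" would register here
         return backends.get(preference, CpuBackend())

     world = [{"pos": [0.0, 10.0], "vel": [0.0, 0.0], "acc": [0.0, -9.8]}]
     pick_backend("cpu").step(world, dt=1.0 / 60.0)
     [/code]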
     
  18. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    Hey Joe, how is the Ironman training going?

    The co-processor is an interesting idea, BUT I see one big hurdle: memory bandwidth. We are nearing the 50GB/s mark with GPU memory, and with GDDR3 advancements and GDDR4 (and XDR2) in the wings it looks like over the next couple of years we could be nearing the 200GB/s level. :shock:

    PCs have been pretty stagnant in memory bandwidth. DDR2 has quickened the pace a little from the "ho hum, 6.4GB/s dual-channel DDR400" that has hung around for soooo long.

    GPUs, PPUs, and CELL, all of which are said to be decent designs for physics, share this in common: fast memory and strong vector/FP performance. I can see Intel tackling the math side; the bandwidth side may be a bigger hurdle (a quick ratio check below makes the point).
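
     A quick ratio check on the figures from the TechReport quote (nothing measured, just the numbers cited above):

     [code]
     # Bytes of memory bandwidth available per FLOP of peak math rate.
     parts = {
         "P4 3GHz":  {"gflops": 12.0, "gbps": 5.96},
         "X1800 XT": {"gflops": 83.0, "gbps": 42.0},
     }
     for name, p in parts.items():
         print(f"{name}: {p['gbps'] / p['gflops']:.2f} bytes/FLOP")
     # Both land near 0.5 bytes/FLOP: the GPU scales bandwidth up in
     # step with its math rate, which is what streaming physics needs.
     [/code]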

    I would not imagine Intel conceding the opportunity to sell a device that can not only speed up games, but possibly aid Vista, and has a world of use in media applications (and gives a HUGE boost in the workstation/server market for scientific apps and render farms). But it is a big hurdle for them IMO.
     
  19. blakjedi

    Veteran

    Joined:
    Nov 20, 2004
    Messages:
    2,975
    Likes Received:
    79
    Location:
    20001
    I would imagine that the best one-two punch for a vendor like Intel would be in the notebook/handheld space. These days Intel is seeing significant design wins in the mid- to low-range CPU/GPU combination chipset space, and the expectation is that this will continue over the next 12-18 months. Power, efficiency, and performance will be the hallmarks of their next significant refresh (the Napa dual-core notebook platform).

    http://www.eweek.com/article2/0,1895,1852853,00.asp
    "Napa, for its part, will allow notebooks to be designed as much as 20 percent smaller than current systems, Intel has said. Part of the reason is that Napa's chipset, the Mobile Intel 945 Express, consumes half a watt less power on average than its predecessor the mobile 915 Express. That's considered a fairly sizeable reduction in power consumption by chipset standards. The chipset will offer a higher performance built-in improved graphics and features such as hardware accelerated multi-streaming high definition MPEG-2 playback."

    Dual core, stronger GPU core, better power consumption... Intel may have a few options available to them, and increasing FP power within their CPU series may not be that distant an option IMO.
     
  20. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    70
    A bit OT...but since you asked... ;)

    First of all, it's not "Ironman", it's "sprint triathlon". ;) Ironman is the specific "long distance" triathlon (2.4 mi swim, 112 mi bike, 26.2 mi run), and I don't want to give anyone the impression that I'm training for THAT! (At least not yet. :twisted: )

    I've taken the last week and a half "off" to give my body a break, as I was pretty religious with my training six days a week for the past 2+ months. At this point, I've managed to swim several 100-meter repeats, still well short of the 800-meter all-at-once swim. But the progress is steady; I could barely swim 50 meters at all when I started.

    My running distance is up to 4 miles, at about an 8:15/mile pace.

    I biked the actual courses that I'm going to race this spring, and put in a "respectable" time of about 53 minutes for the 16.5-mile course.

    My goal is to restart my training next week and put in 2.5 months leading up to x-mas: have the 800m swim completed by then, be able to run 5 miles at an 8 min/mile pace, and work on improving my power using my bike trainer.

    Then post x-mas will be the "real" training session leading up to the races...
     