ATI Multi-VPU up and running...

Not gaming machines, but yes, up to 256 R3x0/R4x0 chips can work together in a supertiling config, and they do so in assorted professional setups (see E&S or SGI).
 
And did anyone mention this Inquirer article saying:

ATI will enter this multiple GPU market very aggressively. The company can use an unlimited number of cards thanks to its Super Tiling marchitecture. Remember the chess board that we mentioned the other day? Each card can render just one of the chess fields. The limit is 34 so far, and I believe that the only limitation is PCIe lane numbers, currently limited to 24.

...

If someone makes a more complex board with more lanes, you will be able to plug even more cards in, and some ATI partners such as Asus or Gigabyte might just place two or more Radeon chips on the same board in order to make a quad-based system.

So the fight won't just end with two cards rendering together; it can be extended to multiple GPUs and cards in the eternal fight of Nvidia versus ATI. µ
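
A minimal sketch of what that "chess board" division might look like in code. The tile size and the round-robin ownership rule here are assumptions for illustration, not ATI's documented Super Tiling scheme:

TILE = 32  # assumed tile edge in pixels; ATI's real value isn't given here

def tile_owner(tx, ty, n_gpus):
    """Round-robin 'chessboard' assignment of tile (tx, ty) to a GPU."""
    return (tx + ty) % n_gpus

def tiles_for_gpu(gpu, width, height, n_gpus, tile=TILE):
    """Yield the pixel rectangles a given GPU would rasterize."""
    for ty in range((height + tile - 1) // tile):
        for tx in range((width + tile - 1) // tile):
            if tile_owner(tx, ty, n_gpus) == gpu:
                yield (tx * tile, ty * tile,
                       min(tile, width - tx * tile),
                       min(tile, height - ty * tile))

# With 2 GPUs this degenerates to the classic checkerboard.
print(len(list(tiles_for_gpu(0, 1600, 1200, 2))), "tiles for GPU 0")

With two GPUs this is the classic checkerboard; with N, ownership just cycles along the diagonals, which is why the scheme scales to as many chips as you have lanes for.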
 
E&S and SGI have been doing this for years now, but has any individual (not a corporation) ever built a setup to harness the power of more than two or four GPUs for the purpose of tiled rendering (not simulation, as seems to be the case with GPGPU projects)?

I have some massively complex scenes that I need to render at least semi-interactively, and this seems to be the only option at the moment.
 
I think I may need to clarify what I asked. I realize that ATI's been doing tiling for years, but my question was: has anybody other than a large, graphics-dedicated corporation ever put together a rack, or group of racks, of ordinary PCs simply to use dozens of GPUs together for the purpose of rendering a scene? How would you even go about doing such a thing? I'm figuring that almost everything I know about building a cluster is irrelevant in this kind of situation (minus hooking the things together and setting up some form of MPI).

Any info on this?
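
For what it's worth, here's a hypothetical skeleton of the "rack of ordinary PCs" idea, using MPI as suggested: every rank renders the screen tiles it owns and ships the pixels to rank 0 for compositing. mpi4py and the render_tile() stub are my assumptions; a real node would rasterize offscreen through OpenGL and read the pixels back:

from mpi4py import MPI

TILE, W, H = 32, 1600, 1200  # assumed tile size and target resolution

def my_tiles(rank, size):
    """Round-robin tile ownership, one GPU per rank."""
    for ty in range(H // TILE):
        for tx in range(W // TILE):
            if (tx + ty) % size == rank:
                yield tx, ty

def render_tile(tx, ty):
    # Stub: stands in for an offscreen render of the scene clipped to
    # this tile's viewport, followed by a pixel readback.
    return bytes(TILE * TILE * 4)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = [((tx, ty), render_tile(tx, ty)) for tx, ty in my_tiles(rank, size)]
parts = comm.gather(local, root=0)  # the bandwidth-hungry step

if rank == 0:
    tiles = [t for part in parts for t in part]
    print("composited", len(tiles), "tiles from", size, "ranks")

Note the gather: every frame, every rank's pixels cross the interconnect, which is exactly where this scheme hurts, as the next reply points out.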
 
sunscar said:
I think I may need to clarify what I asked. I realize that ATI's been doing tiling for years, but my question was: has anybody other than a large, graphics-dedicated corporation ever put together a rack, or group of racks, of ordinary PCs simply to use dozens of GPUs together for the purpose of rendering a scene? How would you even go about doing such a thing? I'm figuring that almost everything I know about building a cluster is irrelevant in this kind of situation (minus hooking the things together and setting up some form of MPI).

Any info on this?

Uhm, where would you get the bandwidth needed if you want to render it in real time?

Raytracing on a farm, yes, but building an entire network of machines to render a scene frame by frame AND using traditional methods of interconnecting?
What do you think the big SGI machines are? They have a highly optimized backplane for sharing this graphics data, and any other use would be just pointless.

There really is no need to build a cluster of two machines at $1000 each when there are video cards at $1000 that can do the same stuff...
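
To put a rough number on the bandwidth objection: here is the raw pixel traffic for a single uncompressed 1600x1200, 32-bit stream at 30 fps. The resolution, colour depth, frame rate, and gigabit Ethernet figure are illustrative assumptions:

width, height = 1600, 1200
bytes_per_pixel = 4          # 32-bit colour
fps = 30

frame_bytes = width * height * bytes_per_pixel
stream = frame_bytes * fps   # raw pixels into the compositor each second

gig_e = 125e6                # ~1 Gbit/s expressed in bytes/s, before overhead
print(f"frame: {frame_bytes / 2**20:.1f} MiB")
print(f"stream: {stream / 1e6:.0f} MB/s vs GigE ~{gig_e / 1e6:.0f} MB/s")

That's about 230 MB/s for one bare colour stream, nearly double what gigabit Ethernet can carry, before depth, AA samples, or any scene traffic.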
 
That's what I was saying: the bandwidth limitation is what's going to make my life a nightmare here in a week or two if we get any form of go-ahead. And it won't be simply two or four machines with two GPUs each; it'd have to be much more than that, because our scene, and the display requirements for that scene, are ridiculous. It's huge, in the region of ten to twenty million visible polygons per frame, at 30 frames per second, plus high levels of AF and FSAA, with environmental simulation on top of all that (our boss keeps saying "like those tech demos [ATI, NVidia], but bigger!!"). Can anybody else tell he's not a tech? Anyway, I really don't see it happening with our $40k cap or current tech as it is. Just getting a consensus on some other opinions.

Later
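
Quick arithmetic on what that spec demands in sustained triangle throughput (the poly counts and frame rate are from the post above; AF, FSAA, and the simulation only add to it):

fps = 30
for polys_per_frame in (10e6, 20e6):
    rate = polys_per_frame * fps
    print(f"{polys_per_frame / 1e6:.0f}M polys/frame -> "
          f"{rate / 1e6:.0f}M polys/s sustained")

So 300 to 600 million polygons per second, sustained, for the geometry alone.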
 
Interesting. Are you rendering at an insane resolution? It's hard to have ten to twenty million visible tris per frame without it; 1600x1200 only has about 2 million pixels, and you start running into geometry aliasing.

With geometry aliasing, since each tri is smaller than a pixel, you essentially sample a random tri out of the mesh. If the surface is 'flat' enough that this doesn't matter, then the tris don't need to be that small, if you see what I mean. Of course, this is hard to achieve in a sufficiently flexible manner without higher-order surfaces evaluated in hardware.

Shifting the balance from fill rate to geometry rate typically lowers bandwidth requirements on current generation chips.

If the average can be kept up at 2 pixels/tri (still very fine at 1600x1200, typically less than half a millimetre on screen) that's about 1 million tris per frame, which feels like it could be achievable at 30 fps.
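
The arithmetic behind that estimate, for anyone checking (all inputs are from the post above):

width, height = 1600, 1200
pixels = width * height        # 1,920,000 pixels
tris = pixels / 2              # at an average of 2 pixels per tri
print(f"{tris / 1e6:.2f}M tris/frame, {tris * 30 / 1e6:.0f}M tris/s at 30 fps")

Which comes out to roughly a million tris per frame and about 29M tris/s at 30 fps, a far cry from the 300-600M polys/s the original spec implies.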
 