Someone mentioned gfx and audio combined on a card, elsewhere someone mentioned using shaders for audio.
Maybe someone can explain to me how that could work beyond simply sharing a slot between two devices?
I don't see how you'd get A3D-style processing from the GPU. All the GPU knows is the current vertex being transformed; it isn't a room, it knows nothing about other rooms or obstructions, and you'd definitely need bounding volumes or you'd waste processing time. I also have no idea how you could guarantee real-time audio performance if the audio has to wait for the graphics to be processed while the frame rate is bouncing between 30fps and 60fps. And I don't see any API that would make all of that work, or one that would be available any time soon, and it would basically be single-vendor. Audio would definitely have to get priority for processing, and even then, if it was just simple sharing of computational resources, you'd again have two separate devices that are merely integrated, not cooperating. Sounds like a solution looking for a problem.
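For what it's worth, here is a hypothetical sketch of what "shaders for audio" might mean in principle: not scene-aware processing like A3D's geometry-based occlusion, but simple per-source DSP (distance attenuation plus propagation delay) where every output sample is independent of its neighbours, which is the same data-parallel shape a fragment shader exploits per pixel. This is my own illustrative pseudomodel, not any vendor's API; the constants and function names are made up, and it says nothing about the real-time scheduling problem raised above.

```python
import math

# Assumed constants for the sketch (not from any real audio API).
SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 48000      # samples per second

def render_sources(sources, num_samples):
    """Mix point sources into one mono channel, applying per-source
    inverse-distance attenuation and propagation delay.

    `sources` is a list of (samples, distance_in_metres) pairs.
    Each output sample depends only on the inputs, never on other
    output samples -- the property that would map onto a shader-style
    parallel pipeline if anyone actually built one for audio.
    """
    out = [0.0] * num_samples
    for samples, distance in sources:
        gain = 1.0 / max(distance, 1.0)                    # simple rolloff
        delay = int(distance / SPEED_OF_SOUND * SAMPLE_RATE)
        for i in range(num_samples):                       # this loop is the "shader"
            src_i = i - delay
            if 0 <= src_i < len(samples):
                out[i] += gain * samples[src_i]
    return out

# An impulse 1 metre away arrives ~140 samples late at 48 kHz.
impulse = [1.0] + [0.0] * 9
mixed = render_sources([(impulse, 1.0)], 200)
```

The per-sample independence is what makes the GPU angle superficially plausible; what this sketch can't show is exactly the objection above: occlusion and room acoustics need scene knowledge the GPU's vertex stream doesn't carry, and the output buffer still has to hit the DAC on a hard deadline regardless of frame rate.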