LucidLogix Hydra, madness?

Well Intel have just bought the company, so they must see some value in their IP.
 
OK, the podcast says a lot and nothing at the same time. This really does feel like a venture capitalist scam to me. The gentleman comes across as a used car salesman....

For example... he says "in Crysis you maybe have 1000 tasks... if you have 1000 GPUs then it is easy to give each GPU one task." Am I crazy, or is this complete BS?

Anyone remember www.steorn.com...?
 
Surely parallelism is what GPUs have been all about for quite some time, so that's not an inherently insane position to take, even if a wee bit oversimplified. Indeed, it's not all that different conceptually from the quote from David Kirk in my sig.

What if you had "a GPU" for each of the 4M pixels on the screen? :oops:
 
The first company that can leverage multiple GPUs in such a way that they can efficiently all render 1 frame together will have a HUGE leg up on the competition.

Regards,
SB
 
Yes, but isn't it the case that each GPU has no clue what the other is doing? Bandwidth is still a big problem, which is the whole point of the CrossFire and SLI bridges at the top of the cards. So his claim that this will work with an AMD and an Nvidia card simultaneously seems very odd. But if you guys say this is possible, I'll believe it... I'm just floating on the surface here, while Geo and the others are swimming down in the deep water with the big fishies ;)
 
The first company that can leverage multiple GPUs in such a way that they can efficiently all render 1 frame together will have a HUGE leg up on the competition.

This is why I posted this. What you describe seems to me to be the holy grail of multi-GPU computing. These guys appear to have taken at least a step in that direction, but their website is sorely lacking in details.

I totally understand if they don't want to divulge information for fear of copycats, I just find it amazing something like that can pop up out of (seemingly) nowhere.

I will definitely be keen on seeing how this plays out.
 
Interesting piece. Thanks for posting, bowman.

I'm a little disappointed after hearing the technical details, as it's sure to introduce latency into the rendering pipeline, as well as CPU overhead. Hopefully both are negligible. It also sounds as though Lucid will have the same problem as the GPU vendors, optimizing per game, except their workload will be doubled by having to support both vendors. I almost feel as though they've bitten off more than any one company can hope to chew.
 
I'm just glad to see it powered up! So many people calling it vaporware.

I'm not worried about the CPU overhead; heck, today's high-end CPUs are twiddling their thumbs while gaming at high resolutions anyway. Might as well put them to work, right?

They've got some clever folks working under their roof, maybe they can pull it off.
 
I'm a little disappointed after hearing the technical details, as it's sure to introduce latency into the rendering pipeline, as well as CPU overhead. Hopefully both are negligible.

Unless they are completely full of it - which is a strong possibility IMO - both of those drawbacks are in fact negligible:
By essentially intercepting the DirectX calls from the game to the graphics cards, the HYDRA Engine is able to intelligently break up the rendering workload rather than just "brute-forcing" alternate frames or split frames as both GPU vendors are doing today in SLI and CrossFire. And according to Lucid all of this is done with virtually no CPU overhead and no latency compared to standard single GPU rendering.
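
For what it's worth, here's a rough sketch of the difference being claimed, in code form. It's entirely my own illustration (the cost estimates, the function names, all of it), not anything Lucid has published:

```cpp
// Entirely illustrative: not Lucid's algorithm and not any real driver API.
// It only contrasts "brute force" alternate-frame dispatch with the per-call
// load balancing the article describes.
#include <cstdint>
#include <cstdio>
#include <vector>

struct DrawCall {
    unsigned id;
    std::uint64_t estimated_cost;  // hypothetical per-call cost estimate
};

// AFR-style: a whole frame goes to one GPU, alternating every frame.
int pick_gpu_afr(unsigned frame_index, int gpu_count) {
    return static_cast<int>(frame_index % gpu_count);
}

// Claimed Hydra-style: each call goes to whichever GPU has the least queued
// work, so every GPU contributes to the same frame.
int pick_gpu_balanced(const DrawCall& call, std::vector<std::uint64_t>& queued) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(queued.size()); ++i)
        if (queued[i] < queued[best]) best = i;
    queued[best] += call.estimated_cost;
    return best;
}

int main() {
    for (unsigned f = 0; f < 2; ++f)
        std::printf("AFR: frame %u -> GPU %d\n", f, pick_gpu_afr(f, 2));

    std::vector<DrawCall> calls = {{0, 400}, {1, 100}, {2, 250}, {3, 50}};
    std::vector<std::uint64_t> queued(2, 0);
    for (const auto& c : calls)
        std::printf("balanced: call %u -> GPU %d\n", c.id, pick_gpu_balanced(c, queued));
}
```

Whether anything can actually estimate those costs accurately, and whether the partial results can be stitched back into one frame, is the part nobody has explained.
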

It also sounds as though Lucid will have the same problem as the GPU vendors, optimizing per game, except their workload will be doubled by having to support both vendors. I almost feel as though they've bitten off more than any one company can hope to chew.
Agreed, and if they don't get this right I will avoid it just as I avoid SLI and CrossFire. Many of the games I play never make it into the popular benchmark suites. Then again, those games tend to run fine on my single 8800 GT...
 
homerdog - they're just talking up their tech, of course they'll say it's negligible ;)

I hope for the best but expect the worst in this case.
 
If my understanding of that PCPer preview is correct, the system architecture looks something like this:

You run a game. Their "driver" intercepts the GL/DX calls, splits them up into "tasks", and then passes the "tasks" on to the ATI or NV driver. The ATI/NV driver sends commands down to the GPUs, through their bridge chip thing that might do something with the commands. Somehow, magically, stuff is rendered on the GPUs, comes together somewhere, and then gets displayed somewhere.

How on earth does any of that actually work? My only guess is that everything is done on the CPU and the hardware is just a fancy PCIe bridge chip? I can't believe it actually touches the command stream from the GPU driver to the GPU, since that would mean reverse-engineering ATI's and NV's drivers and re-implementing them. However, the article claims the chip does some of the splitting and load balancing, meaning data would go their driver -> their chip -> their driver -> GPU driver -> their chip as bridge -> GPU.
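
If it really is all happening at the driver level, I'd picture the shape of it roughly like this. Every name below is made up by me (HydraLayer, VendorDriver, and so on); it's just my reading of the article, not anything Lucid has documented, and real interception would mean proxying the actual D3D/GL entry points:

```cpp
// A toy sketch of the layering described above; all names are invented.
#include <cstddef>
#include <cstdio>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-in for the real ATI or NVIDIA driver that ultimately owns a GPU.
class VendorDriver {
public:
    explicit VendorDriver(std::string name) : name_(std::move(name)) {}
    void submit(const std::vector<std::string>& task) {
        for (const auto& call : task)
            std::printf("[%s] %s\n", name_.c_str(), call.c_str());
    }
private:
    std::string name_;
};

// The "Lucid driver": pretends to be the D3D/GL runtime, batches the
// intercepted calls into tasks, and hands each task to a vendor driver.
class HydraLayer {
public:
    void add_gpu(std::unique_ptr<VendorDriver> drv) { gpus_.push_back(std::move(drv)); }

    // Called wherever the game thinks it is calling the real API.
    void intercept(std::string call) { current_task_.push_back(std::move(call)); }

    // Close the current task (say, at a render-target switch) and dispatch it.
    void flush_task() {
        if (current_task_.empty() || gpus_.empty()) return;
        gpus_[next_gpu_]->submit(current_task_);
        next_gpu_ = (next_gpu_ + 1) % gpus_.size();
        current_task_.clear();
    }
private:
    std::vector<std::unique_ptr<VendorDriver>> gpus_;
    std::vector<std::string> current_task_;
    std::size_t next_gpu_ = 0;
};

int main() {
    HydraLayer hydra;
    hydra.add_gpu(std::make_unique<VendorDriver>("GPU0"));
    hydra.add_gpu(std::make_unique<VendorDriver>("GPU1"));
    hydra.intercept("SetRenderTarget(shadow_map)");
    hydra.intercept("DrawIndexedPrimitive(occluders)");
    hydra.flush_task();  // shadow pass lands on GPU0
    hydra.intercept("SetRenderTarget(back_buffer)");
    hydra.intercept("DrawIndexedPrimitive(scene)");
    hydra.flush_task();  // main pass lands on GPU1, which needs GPU0's shadow map
}
```

Even in this toy version the hard part jumps out: the second task lands on a GPU that needs the shadow map the other GPU just rendered, so something has to shuffle that data across and composite the final frame.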

And then there are the claims of sticking cards from different series together. How on earth will that work if the two cards produce slightly differently rounded results for textures / geometry positions / anything (the obvious one being differences in anisotropic filtering algorithms)? Imagine aliased edge crawl happening while you're standing still, whenever rendering of the same triangle switches between GPUs.
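
Just to make the crawl argument concrete, here's a toy example (made-up filter functions, nothing to do with either vendor's real hardware) of two slightly different filters writing different 8-bit values for the same pixel:

```cpp
// Made-up illustration: two texture filters that differ only slightly in tap
// weights (stand-ins for two vendors' filtering hardware) write values that
// differ by one 8-bit step for the same pixel. Harmless in a screenshot, but
// if the same surface alternates between GPUs every frame, it shimmers.
#include <cmath>
#include <cstdio>

// Pretend texture: a simple horizontal ramp.
float texel(float x) { return x; }

// "GPU A": equal-weight two-tap filter.
float filter_a(float x) { return 0.5f * texel(x - 0.01f) + 0.5f * texel(x + 0.01f); }

// "GPU B": same taps, marginally different weights.
float filter_b(float x) { return 0.4f * texel(x - 0.01f) + 0.6f * texel(x + 0.01f); }

int main() {
    float x = 0.31f;
    long a = std::lround(filter_a(x) * 255.0f);
    long b = std::lround(filter_b(x) * 255.0f);
    std::printf("GPU A writes %ld, GPU B writes %ld for the same pixel\n", a, b);
}
```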

Oh, and how about transparency? Or off-screen buffers used for shadow maps and reflections? Or post-processing?

Basically they come out with huge claims of ideal scaling and a million features, claim ATI and NV are complete idiots for how CF and SLI are currently done, and offer no real proof (it's not that hard to hack up something to make UT not render its floor). Well, CF and SLI work, and until these guys offer a real explanation of how the system would actually work I'm going to write them off as a venture capital money pit.
 
Someone requested a white paper?
http://lucidlogix.com/technology/index.html

btw.
LucidLogix says this approach can solve many of the problems inherent in multi-GPU rendering today. First, you get better scaling. The test system we saw our live demo on contained two GeForce 9800 GT cards (basically identical to the 8800 GT in performance) and yet we saw Crysis running at 1920x1200 in DX9 mode, with all details cranked up, getting 45–60 frames per second all the time. That's far better than the usual SLI scaling. For three or four graphics cards, where SLI and CrossFire deliver diminishing returns, the scaling for a HYDRA-enabled system should still be almost entirely linear, provided the game isn't limited by CPU power
http://www.extremetech.com/article2/0,2845,2328497,00.asp
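
That last caveat ("provided the game isn't limited by CPU power") is basically Amdahl's law. A quick back-of-the-envelope with made-up numbers (not measurements of anything) shows why scaling can look near-linear right up until the CPU share of frame time starts to dominate:

```cpp
// Back-of-the-envelope only: the 4 ms / 40 ms split is made up, not measured.
// Crude serial model of frame time; it ignores CPU/GPU overlap and assumes
// the ideal zero-overhead GPU split that Lucid is claiming.
#include <cstdio>

int main() {
    const double cpu_ms = 4.0;   // hypothetical CPU-side work per frame
    const double gpu_ms = 40.0;  // hypothetical GPU-side work per frame (one GPU)

    for (int gpus = 1; gpus <= 4; ++gpus) {
        double frame_ms = cpu_ms + gpu_ms / gpus;
        std::printf("%d GPU(s): %.1f fps (%.2fx of one GPU)\n",
                    gpus, 1000.0 / frame_ms, (cpu_ms + gpu_ms) / frame_ms);
    }
}
```

Even in that ideal, made-up case the fourth GPU only gets you to about 3.1x, and that's before any overhead the Hydra layer itself adds.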
 