LucidLogix Hydra, madness?

I'm still wondering: with the supposed possibility of using both vendors in the same machine working together, what are they planning to do about the slight gamma difference between the two parties' outputs (or at least the difference they used to have? not 100% sure of the current situation)
 

Lol. That piece made me chuckle. Charlie has been right about VGA a lot lately, but when it comes to this... it just came from a place where the sun doesn't shine... A lot of speculation, and pretty far off the mark too...

Trinergy exists. We've had an early Trinergy engineering sample in the office for ages.... and I'm sure as hell certain that it wasn't 'photoshopped'. ;)

Big Bang Trinergy should be available before the end of the year. I've already seen allocation numbers, dates and prices... and furthermore we've got some interesting Big Bang related events coming up.
 
Trinergy exists. We've had an early Trinergy engineering sample in the office for ages.... and I'm sure as hell certain that it wasn't 'photoshopped'. ;)

Well that's easy, Trinergy isn't based on a fantasy chip :) What about the Hydra board, is that still in play or is Charlie onto something?
 
Well that's easy, Trinergy isn't based on a fantasy chip :) What about the Hydra board, is that still in play or is Charlie onto something?

Charlie didn’t even understand the technology behind this. He might have heard some rumors but could he put them in the right context without the technology knowledge?
 
Well that's easy, Trinergy isn't based on a fantasy chip :) What about the Hydra board, is that still in play or is Charlie onto something?

The Lucid Hydra board is still very much in play. Lucid is optimizing the driver for Windows 7 so that it works stably and in all configurations. As you can probably understand, the mix 'n match concept is an awesome idea, but MSI wants to deliver a product of high quality and, especially, one that's very stable. So when Fuzion has passed MSI's internal quality assurance test, we will bring it to market, which is most likely Q1 2010.
 
Charlie didn’t even understand the technology behind this. He might have heard some rumors but could he put them in the right context without the technology knowledge?

Don't think he has any interest in understanding the technology. He spent the entire article building up the Nvidia conspiracy theory based on his guesswork and speculation with nothing on the tech itself. But to be fair, what technology is there to understand? There have been no disclosures at all on how it supposedly works.
 
It’s more that he doesn’t even know how GPUs work in general. Therefore he couldn’t understand how Lucid’s solution would need to work.

“The idea is simple, the Hydra 200 looks at PCIe traffic going to a GPU and parses the DX calls to one GPU or another intelligently based on several algorithms.”

Parsing the PCIe traffic will get you nowhere, as you will only see IHV-specific commands there. You will never see any DX calls. Since Vista, DX calls don’t even reach the kernel driver anymore.
 
Maybe he means that the chip decides where Lucid's own data goes (originating from its driver)
 
The Lucid Hydra board is still very much in play. Lucid is optimizing the driver for Windows 7 so that it works stably and in all configurations. As you can probably understand, the mix 'n match concept is an awesome idea, but MSI wants to deliver a product of high quality and, especially, one that's very stable. So when Fuzion has passed MSI's internal quality assurance test, we will bring it to market, which is most likely Q1 2010.

Excellent news, I was worried it might never see the light of day, and while Charlie does like his hyperbole he's had a lot of coups lately so I feared the worst. If all this product does is force both ATI and Nvidia to improve their GPU scaling methods then I'd be delighted. If it works as well as advertised (and that's a mighty big if) then its definitely something I'd be interested in.
 
Maybe he means that the chip decides where Lucid's own data goes (originating from its driver)

As any PCIe switch does based on PCIe data headers.

But as trinibwoy already said, he doesn't seem interested in the technical background at all.

I only noticed his outright mistake because I had played around with a GPU command splitter in the past. It was for a video wall with a very high resolution, so you needed multiple GPUs to drive it. Something similar to AMD's new Eyefinity.
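To illustrate the point about PCIe headers: a switch routes each transaction by the address in its TLP header, matching it against the address windows (BARs) of its downstream ports, and never inspects the payload. The following is a toy sketch of that behavior; the address ranges and port names are made up for illustration, not taken from any real Hydra documentation.

```python
# Toy model of PCIe address routing: the switch forwards each memory
# transaction to the downstream port whose address window (BAR) contains
# the packet's target address. It never looks at the payload, so it has
# no way to "parse DX calls" -- those only exist above the kernel driver.

# Hypothetical address windows for two GPUs behind the switch.
BARS = {
    "gpu0": (0xD000_0000, 0xDFFF_FFFF),
    "gpu1": (0xE000_0000, 0xEFFF_FFFF),
}

def route(tlp_address: int) -> str:
    """Pick a downstream port using the TLP header address alone."""
    for port, (lo, hi) in BARS.items():
        if lo <= tlp_address <= hi:
            return port
    return "upstream"  # no window matched; forward toward the root complex

print(route(0xD123_4567))  # gpu0
print(route(0xE000_0010))  # gpu1
```

The point of the sketch: everything the hardware can see is in the header, so any "intelligence" about API calls has to live in Lucid's driver, not in the chip's traffic inspection.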

Excellent news, I was worried it might never see the light of day, and while Charlie does like his hyperbole he's had a lot of coups lately so I feared the worst. If all this product does is force both ATI and Nvidia to improve their GPU scaling methods then I'd be delighted. If it works as well as advertised (and that's a mighty big if) then its definitely something I'd be interested in.

Performance-wise you can’t beat AFR if the game is AFR-friendly. The only problems it causes are somewhat higher latency and the so-called micro-stuttering. The second problem is solvable at the hardware level by adding some timer and synchronization logic.

Besides that, anything other than AFR is very complicated to manage with modern rendering techniques. All this dynamic render-to-texture and post-processing requires massive synchronization when done across multiple GPUs for one frame.
 
Performance-wise you can’t beat AFR if the game is AFR-friendly. The only problems it causes are somewhat higher latency and the so-called micro-stuttering. The second problem is solvable at the hardware level by adding some timer and synchronization logic.

This brings up another interesting issue: the main (only?) marketing point for Lucid is heterogeneous multi-card, since if you wanted homogeneous you'd just use SLI or CF. What sort of micro-stuttering-like effects will you get if you have one card that renders 20% faster than the other? The only way I can think of to handle this is to delay the faster card so it behaves like the slower card.
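The delay-the-faster-card idea can be sketched with made-up frame times. This is a toy model of presentation intervals, not how any real driver schedules work: assume the two cards take 10 ms and 12 ms per frame, alternating AFR-style.

```python
# Toy AFR pacing sketch (assumed numbers): gpu0 renders in 10 ms, gpu1
# in 12 ms, alternating frames. Without pacing, presentation intervals
# alternate 10/12 ms (perceived stutter); with pacing, the faster GPU
# is delayed so every interval matches the slowest cadence.

def present_times(frame_times, n_frames, pace=False):
    """Return presentation timestamps (ms) for n_frames, AFR-style."""
    times, t = [], 0.0
    slowest = max(frame_times)
    for i in range(n_frames):
        dt = slowest if pace else frame_times[i % len(frame_times)]
        t += dt
        times.append(round(t, 1))
    return times

print(present_times([10.0, 12.0], 4))             # uneven 10/12 ms gaps
print(present_times([10.0, 12.0], 4, pace=True))  # uniform 12 ms gaps
```

The paced timeline is smooth but every frame now arrives at the slower card's cadence, which is exactly the throughput cost discussed below.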
 
Another problem regarding multi-vendor combos that popped up on another forum: what do they plan to do about different AA patterns and AF quality, for example?
There's just no real way to get separate vendors' cards working together without problems
 
This brings up another interesting issue: the main (only?) marketing point for Lucid is heterogeneous multi-card, since if you wanted homogeneous you'd just use SLI or CF. What sort of micro-stuttering-like effects will you get if you have one card that renders 20% faster than the other? The only way I can think of to handle this is to delay the faster card so it behaves like the slower card.

Yes, AFR can only be stutter-free if all GPUs run at the same speed.

Another problem regarding multi-vendor combos that popped up on another forum: what do they plan to do about different AA patterns and AF quality, for example?
There's just no real way to get separate vendors' cards working together without problems

This problem can also occur with different chips from the same vendor, though.
 
The second problem is solvable at the hardware level by adding some timer and synchronization logic.
It doesn't really need any hardware: as long as the driver knows the frame completion times, it can detect the stutter and block render calls to smooth out the instantaneous frame rates. The problem is that fixing micro-stuttering will always lower the average frame rate ... i.e. benchmarks.
 
It doesn't really need any hardware: as long as the driver knows the frame completion times, it can detect the stutter and block render calls to smooth out the instantaneous frame rates. The problem is that fixing micro-stuttering will always lower the average frame rate ... i.e. benchmarks.

They should simply add an option for it in the CP.
 
Technology aside, no matter how smart they are this thing will require extensive profiling and application specific tweaks. I'm not confident in their ability to beat AMD and Nvidia at application level support. Then there's the whole problem of product support when things don't work, are motherboard manufacturers going to be on the hook for that? Driver updates should be fun too.

Love to be proven wrong, just don't see it happening based on what's been disclosed so far.
 
Maybe they can throw in an option for turning CF off while they're at it? And an option for turning texturing optimisations off would be great too.
In other words: don't count on it.

You can turn off CF in CCC by disabling Catalyst A.I. ;)

But for the rest of your wishes I'm with you!
 