NVIDIA shows signs ... [2008 - 2017]

Eyefinity does add a bit more to it than just 3 displays and a single virtual display.

For the current shipping cards it also adds 2x1 + 1 monitor configurations, where you can have one virtual display spanning 2 monitors plus another independent display.
If I recall correctly (and a simplistic search of Google seems to support this), Parhelia did indeed also support a (2 x 1) + 1 setup.

Now of course, I very much doubt they supported aggregating the displays across multiple display devices, and it doesn't look like they supported rotation of the displays. I also found a resolution limit: no single display could be above 1280x1024. I don't know if that was a DX issue, a technological issue, a driver issue, or who knows what.

I think this puts Eyefinity back in the evolutionary stack rather than the revolutionary one. Not that it's a bad feature at all; I'm glad to see it back, honestly. I'm also excited to have the option of getting the full start menu across both of my dual displays in Windows without having to go get secondary software.
 
This rumour of NVIDIA working with Nintendo on the next DS has been going around for ages.
 
Parhelia did indeed allow a single virtual display surface to be stretched across two or even three monitors. That included the Windows desktop (complete with your start menu across all display devices), and games could do the same.

So to that point, Matrox did indeed beat ATI to the "Eyefinity" punch. I do believe there were some resolution limits, but can't remember what they were and I'm not immediately finding mention of it online. I'm sure Google has it somewhere...

The resolution limit is pretty irrelevant because of the age difference. That said, Matrox did seem to be the first, or one of the first, to do it. There were some oddball cards that did similar things; I put a few in with Bloomberg systems for my clients, and they likely did it as well. Far from mainstream, but there were other options to do the same thing.

-Charlie
 
Nicely done, AMD.
Charlie, I sure hope you will write about AMD's anti-Fermi FUD campaign soon.

These slides look like something that Dave Baumann @ AMD came up with (or at least had a hand in). Hilarious :D

Regarding AMD/ATI's claim that NVIDIA pre-announced their new architecture: that is not entirely correct. NVIDIA pre-announced the details of the new architecture that are relevant to the HPC market, where there are a lot of mission-critical applications in play. Knowing some details about a revolutionary new architecture may actually be a good thing for the people in those industries :) On the other hand, very little information was released by NVIDIA regarding improvements or enhancements relevant to the gaming market. It's not like they put all their cards on the table and showed their full hand, not even close.

Anyway, to expect or anticipate that GF100 will be a flop for gaming seems quite a trip, especially given NVIDIA's history of what they come up with after a lackluster generation of cards, and given the strength of the competition. If anything, I would expect GF100 to be hugely successful for gaming, and some of the new features developed for Fermi should pay huge dividends in enhancing the gameplay experience, such as the ability to have highly playable framerates with advanced PhysX effects enabled in-game. Go look at some of the YouTube videos of the PhysX effects in Batman: Arkham Asylum and it's very clear they really add to the gaming experience.

Time will ultimately tell, I guess, but that's my story, I'm sticking to it, and I simply can't wait to say "up yours, Chuck" ;)
 
What extra "CUDAness"?

Jawed

Two big ones:

1. Shared Memory (yes, it's used to hold operands for interpolation, but that's not its primary purpose; see the sketch after this list)
2. Going scalar, and the extra overhead that comes with it. We've seen in the past few days examples of CS code that required explicit vectorization to take full advantage of VLIW hardware. At the same time you've put in a lot of work demonstrating VLIW's higher efficiency in game shaders.
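
To make point 1 concrete, here's a minimal sketch of what shared memory is primarily for: low-latency data exchange between the threads of a block, in this case a block-wide sum. It's CUDA C, the kernel and variable names are made up, and it's only meant to show the sharing pattern, not anyone's actual code.

Code:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float* in, float* out, int n)
{
    __shared__ float buf[256];              // per-block scratchpad
    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + tid;

    buf[tid] = (gid < n) ? in[gid] : 0.0f;  // each thread stages one element
    __syncthreads();

    // Tree reduction entirely in shared memory: threads read values that
    // *other* threads wrote, which is the part interpolation alone never needs.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = buf[0];
}

int main()
{
    const int n = 1024, threads = 256, blocks = n / threads;
    float h_in[n], h_out[blocks];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    float total = 0.0f;
    for (int i = 0; i < blocks; ++i) total += h_out[i];
    printf("sum = %f\n", total);            // expect 1024.0

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}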
 
We've seen in the past few days examples of CS code that required explicit vectorization to take full advantage of VLIW hardware.
While this is true for now, the compiler could join 2 or 4 threads of low-ILP workloads to take full advantage of the VLIW hardware...
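
For what it's worth, a rough sketch of what that "join 2 or 4 threads" idea amounts to, written in CUDA C syntax purely for familiarity (the VLIW target in question is AMD's hardware, where the packing would be the job of the shader/IL compiler). The kernels are made up, device code only, host launch omitted.

Code:
// Low-ILP original: one element per thread, a single dependent chain,
// so a 4/5-wide VLIW bundle mostly has one useful slot filled.
__global__ void saxpy_scalar(const float* x, const float* y, float* out,
                             float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * x[i] + y[i];
}

// "Merged" form: each thread carries four independent elements. The four
// multiply-adds have no dependences on each other, so a VLIW compiler could
// co-issue them in one wide bundle.
__global__ void saxpy_merged4(const float4* x, const float4* y, float4* out,
                              float a, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        float4 xv = x[i], yv = y[i], r;
        r.x = a * xv.x + yv.x;   // four independent operations:
        r.y = a * xv.y + yv.y;   // ILP the compiler can pack
        r.z = a * xv.z + yv.z;   // into one VLIW instruction
        r.w = a * xv.w + yv.w;
        out[i] = r;
    }
}

All the merged form buys is four independent multiply-adds per thread for the wide issue slots to chew on; it says nothing yet about what happens once each work-item has its own control flow.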
 
I am not talking about speed. I am talking about merging 4 threads (each being pretty branchy) into a single thread to extract ILP in the compiler. Somehow, I don't think that is going to be a piece-of-cake...
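
A hypothetical illustration of the problem (again CUDA C syntax just for readability, names made up, device code only): once each of the four merged work-items carries its own data-dependent branch, keeping real control flow would mean tracking up to 2^4 path combinations, so the static fallback is if-conversion/predication, which evaluates both sides and selects.

Code:
// Original per-work-item code: a data-dependent branch.
__device__ float shade(float v)
{
    if (v > 0.0f)
        return sqrtf(v);     // "expensive" path
    return v * 0.5f;         // cheap path
}

// One work-item per thread: trivial control flow, but little ILP per thread.
__global__ void shade_scalar(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = shade(in[i]);
}

// Four work-items merged into one thread: the four conditions are independent,
// so the static answer is to if-convert each lane (compute both sides, select).
__global__ void shade_merged4(const float4* in, float4* out, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        float4 v = in[i], r;
        r.x = (v.x > 0.0f) ? sqrtf(v.x) : v.x * 0.5f;
        r.y = (v.y > 0.0f) ? sqrtf(v.y) : v.y * 0.5f;
        r.z = (v.z > 0.0f) ? sqrtf(v.z) : v.z * 0.5f;
        r.w = (v.w > 0.0f) ? sqrtf(v.w) : v.w * 0.5f;
        out[i] = r;
    }
}

The selects keep the VLIW slots busy, but every lane pays for both paths, and it only gets worse as the branches get deeper or nested.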
 
I am not talking about speed. I am talking about merging 4 threads (each being pretty branchy) into a single thread to extract ILP in the compiler. Somehow, I don't think that is going to be a piece-of-cake...
That's stuff better left as a run-time (hw or sw..) process
 
That's stuff better left as a run-time (hw or sw..) process

Even if it's dynamic, it still implies that instructions could be issued from a variable number of threads/work-items per clock. That sounds like a further step beyond the already ominous DWF (dynamic warp formation). And doesn't that completely break VLIW anyway?
 