Vista and SLI

Kombatant

I was reading a pretty interesting piece on Windows Vista (you can read it here) concerning its hefty hardware requirements. Here's something that caught my eye:
"Thirdly, the graphics card and system bus is essential. PCI x16 is going to be very important. Any of today's 3D GPUs will be fine… we're not waiting for some mystical monster that may or may not come out. But they need to have 128MB of RAM on it. If they've only got 64 don't panic.
Since graphics cards drop to PCIe x8 to get into SLI mode (at least that's my understanding; if I am incorrect, please slam me from here to *insert random faraway place here*), is there a chance that this would create problems?
 
Why wouldn't x8 work? I would bet money that it's not an issue of bandwidth, and that's the only drawback of cutting the bus in half over two slots for SLI, correct?
 
IIRC the recommended minimum for Vista support is AGP 8x, so PCI Express x8 should be fine. We'll probably see more dual PCI Express x16 chipsets by the time Vista comes out as well.
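
For a rough sense of scale, here's the spec-sheet math (ballpark figures only; real-world throughput is lower than the theoretical peak):

```cpp
// Back-of-the-envelope bus bandwidth comparison (spec figures, not benchmarks).
#include <cstdio>

int main() {
    // AGP 8x: 32-bit bus, 66 MHz base clock, 8 transfers per clock.
    const double agp8x_mb_s   = 66.66 * 8 * 4;   // ~2133 MB/s, shared between directions
    // PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s per lane, per direction.
    const double pcie_x8_mb_s  = 8  * 250.0;     // 2000 MB/s each way, full duplex
    const double pcie_x16_mb_s = 16 * 250.0;     // 4000 MB/s each way, full duplex

    std::printf("AGP 8x   : %.0f MB/s (readback is much slower in practice)\n", agp8x_mb_s);
    std::printf("PCIe x8  : %.0f MB/s per direction\n", pcie_x8_mb_s);
    std::printf("PCIe x16 : %.0f MB/s per direction\n", pcie_x16_mb_s);
    return 0;
}
```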
 
If Vista needed PCIe x16 to get the best GUI effects out... then I think that's a sign of the apocalypse. There aren't any PC games out there that use the full bandwidth of PCIe x8, let alone x16. I know you can downgrade the GUI of Vista (to support lower-end video cards)... but saying PCIe x16 is important? What's that about?
 
I really really can't see Vista querying the root host for the link width and refusing to do something if it's less than 16. No chance....

Sounds more like a crappy way of saying "it'll want an arseload of bandwidth".
 
Skrying said:
Why wouldn't x8 work? I would bet money that it's not an issue of bandwidth, and that's the only drawback of cutting the bus in half over two slots for SLI, correct?

I thought so too, but I read this quote, which got me thinking (from the same article):
One of the things you'll notice about Vista beta 1 is that it runs dramatically quicker than Windows XP. The reason is the GPU is now doing a lot of work that the CPU used to have to do. There are a couple of gotchas though. The GPU needs a very high speed bi-directional bus to communicate with main memory. That has not been the case in the past, and what it means is that AGP will not be optimal. The reason is that one of the things the LDDM can do is allow a video card to back stuff off into the PC's main memory if it has a particularly intensive task and needs the video RAM to work in. That's an intensely bi-directional type of communication.
 
Interesting, I didn't read the article, but I'll give it a look over now. It's rather shocking to me that it'd need that much bandwidth. So, let's say that the actual card is rather slow (an X300SE for example, or one of the 6200TCs), how hard would this be on the GUI performance of Vista?
 
That's not shocking. This is shocking:
If you move from 32 to 64 bit, you basically need to at least double your memory. 2 gigs in 64 bit is the equivalent of a gig of RAM on a 32bit machine. That's because you're dealing with chunks that are twice the size… if you try to make do with what you've got you'll see less performance. But RAM is now so cheap, it's hardly an issue.
Either that guy's smoking something, or the entire Longhorn development team needs to be fired on the spot. That is the most stupid thing I have ever seen. For the love of god, if the 32-bit version can handle something with 32-bits, there is NO reason to use 64-bit integers and floats everywhere. You use them where you NEED them, stupid.

Gosh... I can't believe that's for real. Someone, PLEASE, tell me the guy's kidding. If it's true, the memory bandwidth costs will most likely mean that 32-bit will be faster than 64-bit on Longhorn. How ironic. The only use in hell they'd have for this is if they feared overflow on every single one of their variables, and thus just used 64-bit everywhere instead of fixing the actual problems because, hey, nobody cares if the 32-bit version crashes every five minutes... right?

I would suspect this to be a misinterpretation (or a joke, but the site *seems* serious), either by the presenter or the reporter. At least, I sure hope so, because I'd love to hear about the advantage of wasting half your RAM. Pointers, obviously, are always 64 bits now. I didn't know Microsoft had managed to program such an efficient platform that its memory is 99% pointers, though... *shrugs*


Uttar
 
Uttar, this guy is kidding. 32-bit vs. 64-bit is the same story as with the two XP versions.

BTW: Aero Glass runs very well with my AGP 8x 6800GT, but the nVidia driver is still too "alpha" for me to use Vista for my daily work.
 
Afaik Aero Glass will be the fallback mode, with Aero Diamond (which MS hasn't demonstrated yet) being the "über candy" mode.
 
Uttar said:
Either that guy's smoking something, or the entire Longhorn development team needs to be fired on the spot. That is the most stupid thing I have ever seen. For the love of god, if the 32-bit version can handle something with 32-bits, there is NO reason to use 64-bit integers and floats everywhere. You use them where you NEED them, stupid.
He's smoking something. He's utterly failed to realize that going to 64-bit only doubles the size of instruction storage, not data storage. This means that your executables will frequently be twice the size, but data files won't. Given that data files are by far the majority of data that programs must have in memory, this isn't going to come close to doubling your memory requirements.
 
Chalnoth said:
He's smoking something. He's utterly failed to realize that going to 64-bit only doubles the size of instruction storage, not data storage. This means that your executables will frequently be twice the size, but data files won't. Given that data files are by far the majority of data that programs must have in memory, this isn't going to come close to doubling your memory requirements.

Actually 64-bit doubles the size of pointers.

The impact on code size is not much, as there's less than 1 byte of average increase per instruction and most instructions are 2+ bytes long. So I'd say less than a 50% increase in code size.
The data size increase depends on what the application stores, but I'd guess the increase should be way below 20%.
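
A quick toy illustration of the point (made-up struct; exact padding depends on the compiler):

```cpp
// Only pointers (and padding around them) grow on 64-bit; plain data doesn't.
#include <cstdio>

struct Sample {
    int     values[4];  // 16 bytes on both 32-bit and 64-bit
    char    name[16];   // 16 bytes on both
    Sample* next;       // 4 bytes on 32-bit, 8 bytes on 64-bit
};

int main() {
    // Typical result: 36 bytes in a 32-bit build, 40 bytes in a 64-bit build --
    // roughly an 11% increase, nowhere near a doubling.
    std::printf("sizeof(Sample) = %zu\n", sizeof(Sample));
    return 0;
}
```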
 
Kombatant said:
I was reading a pretty interesting piece on Windows Vista (you can read it here) concerning its hefty hardware requirements. Here's something that caught my eye:

Since graphics cards drop to PCIe x8 to get into SLI mode (at least that's my understanding; if I am incorrect, please slam me from here to *insert random faraway place here*), is there a chance that this would create problems?


I think this is dumbed-down PR-speak for "PEG is good". Note that there is no such thing as PEG x8; this is a PEG (PCIe x16) working with half the interface. So, in so many words, saying PCIe x16 is recommended is analogous to saying you should have a PCIe video solution because x16 is the official minimum. Dropping down to x8 most likely won't be a problem and, if it is, Microsoft seriously needs to offer something tremendously valuable in return for requiring all that power. As a side note, I am sure Intel et al have nothing against inflated requirements (at least in marketing and on the box) as this helps create demand for new system board configurations to replace those old, decrepit PCIe x16 ones (yuck!).

That said, I think the main focus is on the bi-directional nature of the PCIe interface and not so much the bandwidth to the video card, as is most often noted. Vista may use some 'interesting' functions that rely on reading data back from the video card, and here PCIe is much better than AGP even if you discount the headline bandwidth figure (by which I mean the usual so-and-so many MB/s quoted to the video card; if AGP were marketed with more thorough numbers it wouldn't look so rosy on the box ;))
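
To make the "reading back" part concrete, this is roughly what a readback looks like in Direct3D 9 terms (just a sketch with device setup and error handling omitted; I'm not claiming this is what Vista's compositor actually does internally):

```cpp
// Sketch: copying a render target back to system memory in Direct3D 9.
// On AGP this path is notoriously slow because the bus was designed for fast
// writes *to* the card; PCIe gives symmetric bandwidth in both directions.
#include <windows.h>
#include <d3d9.h>

HRESULT ReadBackRenderTarget(IDirect3DDevice9* device,
                             IDirect3DSurface9* renderTarget,  // assumed A8R8G8B8
                             UINT width, UINT height)
{
    IDirect3DSurface9* sysmemCopy = NULL;

    // A surface in D3DPOOL_SYSTEMMEM lives in main memory, not video memory.
    HRESULT hr = device->CreateOffscreenPlainSurface(
        width, height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &sysmemCopy, NULL);
    if (FAILED(hr))
        return hr;

    // This is the GPU -> system memory transfer that actually crosses the bus.
    hr = device->GetRenderTargetData(renderTarget, sysmemCopy);

    // ... the CPU could now lock sysmemCopy and read the pixels ...
    sysmemCopy->Release();
    return hr;
}
```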
 
On the comment about going 64-bit and the associated memory requirements, this is obviously not right as has been mentioned before. However, the most important benefit of 64-bit computing for a home PC is the increased memory handling capability. Therefore, one can only hope that this will be leveraged and that Vista is 'optimized' to take advantage of large amounts of memory (4GB+). We are now on the boundary where people are starting to think about 2GB system RAM seriously, but this is mostly to satisfy some 'renegade' application that devours memory to no end (often unnecessarily methinks). Vista could, and hopefully will, take better advantage of system resources by caching the system and even applications in memory (perhaps predicting the user's behavior and preloading data into memory, although this can sometimes be a bad thing).

I say this mainly because I often think it is sad how PC performance is so heavily dominated by CPU specifications (marketing). If you look at many (most?) PCs they are underpowered beyond the CPU. Lots of RAM and an OS that uses it well would do wonders for smooth sailing. For example, consider the fact that when you play a game like Half-Life 2 you are still pulling data from the HDD as you play. Imagine the same CPU and GPU performance (that is, your maximum "frame rate" won't increase) but with the entire game in RAM. It should make it smoother and avoid the hitching we see when some games suddenly discover that they need to grab some more data from a compressed data archive somewhere. This, of course, has nothing to do with 64-bit beyond the ability to access large amounts of RAM, but that, in itself, can be a huge boon if it is taken advantage of.
 
Kombatant said:
Afaik Aero Glass will be the fallback mode, with Aero Diamond (which MS hasn't demonstrated yet) being the "über candy" mode.
There's a good chance that Diamond won't ship with the initial Vista release.
 
Dave Baumann said:
There's a good chance that Diamond won't ship with the initial Vista release.

I heard that Diamond is only for the Media Center editions of Vista.
 
Hyp-X said:
Actually 64-bit doubles the size of pointers.

The impact on code size is not much, as there's less than 1 byte of average increase per instruction and most instructions are 2+ bytes long. So I'd say less than a 50% increase in code size.
The data size increase depends on what the application stores, but I'd guess the increase should be way below 20%.
Well, I'm speaking from experience with my own compiled code on Linux. My executables and memory usage (since my current code stores almost no data) nearly double in size when compiled for 64-bit.

I don't see how this would be different under 64-bit Windows.
 
wireframe said:
That said, I think the main focus is on the bi-directional nature of the PCIe interface and not so much the bandwidth to the video card, as is most often noted. Vista may use some 'interesting' functions that rely on reading data back from the video card, and here PCIe is much better than AGP even if you discount the headline bandwidth figure (by which I mean the usual so-and-so many MB/s quoted to the video card; if AGP were marketed with more thorough numbers it wouldn't look so rosy on the box ;))
I think the odds of this are somewhere in the vicinity of zero. If the GPU is performing GDI operations (so... compositing), why would it need to send data back to main memory for any reason? My guess is that this will have nothing to do with Vista the shell (or however you want to define it) at all, but will instead be a large part of DX10 or whatever it's eventually called.

The reason is that one of the things the LDDM can do is allow a video card to back stuff off into the PC's main memory if it has a particularly intensive task and needs the video RAM to work in. That's an intensely bi-directional type of communication.
What the hell does that even mean? Nothing, it seems, unless it's implying there's some sort of CPU GDI fallback for when you're running a fullscreen 3D app, for example.

Also:

"The downside is that all your existing flat panel monitors and projectors aren't going to work with high-def videos in Vista. Bad news."
Well, guess there aren't going to be any HD videos for Vista, then.
 
In the current driver model, resources can't be swapped out of video memory. There are two resource pools: the default pool and the managed pool. Anything put into the default pool exists only in video memory. If the video memory is 'lost' (i.e. another application takes exclusive ownership of the device), the resource is lost. The managed pool, on the other hand, keeps a copy of the resource in system memory, so if the resource is lost it can be re-uploaded to the card automatically. This means resources in the managed pool take up both video and system memory at the same time.
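
In Direct3D 9 code, the two pools being described look roughly like this (a minimal sketch, with error handling and device setup omitted):

```cpp
// The pre-Vista (D3D9) resource pools described above, in code form.
#include <windows.h>
#include <d3d9.h>

void CreateExampleTextures(IDirect3DDevice9* device)
{
    IDirect3DTexture9* defaultTex = NULL;
    IDirect3DTexture9* managedTex = NULL;

    // D3DPOOL_DEFAULT: lives only in video memory. If the device is lost
    // (another app takes exclusive ownership), the contents are gone and the
    // application has to recreate the resource after Reset().
    device->CreateTexture(256, 256, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_DEFAULT, &defaultTex, NULL);

    // D3DPOOL_MANAGED: the runtime keeps a system-memory backup copy and
    // re-uploads it automatically after a lost device -- at the cost of the
    // resource occupying both system and video memory at once.
    device->CreateTexture(256, 256, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, &managedTex, NULL);
}
```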

Vista changes this. In Vista there will only be a single type of pool. All resources will be placed into video memory if it is free. If there is not enough, the OS will either swap lower-priority resources out into system memory, or place the new resources into system memory. If a resource is in system memory and video memory becomes free, Vista will then transfer the resource back into video memory and free up the system memory.
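
Purely to illustrate the placement/eviction policy being described, here's a toy sketch (this is not actual LDDM or driver code, just the idea):

```cpp
// Toy sketch of a single-pool policy: put resources in video memory when it's
// free, otherwise evict lower-priority residents to system memory or leave the
// newcomer in system memory.
#include <algorithm>
#include <cstddef>
#include <vector>

enum class Location { Video, System };

struct Resource {
    std::size_t size;
    int         priority;            // higher = more important to keep resident
    Location    where = Location::System;
};

class VramManager {
public:
    explicit VramManager(std::size_t vramBytes) : free_(vramBytes) {}

    void Place(Resource& r) {
        // Evict lower-priority residents until the new resource fits.
        while (free_ < r.size) {
            auto victim = std::min_element(
                resident_.begin(), resident_.end(),
                [](Resource* a, Resource* b) { return a->priority < b->priority; });
            if (victim == resident_.end() || (*victim)->priority >= r.priority)
                return;                          // no room: resource stays in system memory
            // This copy back to system memory is the bus traffic the post talks about.
            (*victim)->where = Location::System;
            free_ += (*victim)->size;
            resident_.erase(victim);
        }
        r.where = Location::Video;
        free_ -= r.size;
        resident_.push_back(&r);
        // (A real manager would also promote spilled resources back into video
        //  memory whenever space frees up; omitted here for brevity.)
    }

private:
    std::size_t            free_;
    std::vector<Resource*> resident_;
};
```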

So, quite simply, Vista needs a fast bus from the graphics card back to system memory, because when Vista swaps resources out of video memory you want that to happen as fast as possible.

It is very possible that fullscreen games running under Aero Glass may have significantly longer startup times on AGP cards compared to PCIe x16, since the OS will need to swap all the GDI elements out of video memory into system memory while the game is using all the video memory itself.
 
It doesn't seem to me that game startup times would be impacted significantly. The AGP bus may be a bit slow on getting stuff back to system RAM, but it's still much faster than hard drives, and games still typically load much more data from the hard drive to system RAM than can be stored in video memory.

I think we'd be more likely to see issues with multitasking.
 