Paging File

akirru

Newcomer
Hi,

I have a gig of RAM currently and XP Pro, and I'm wondering what I should set the paging file size to? It's currently whatever the default setting is :)

Many Thanks
 
The default seems to work fine... I played with it several times and never noticed any change from making it larger or setting it to a specific size or anything.
 
I ran for several months with no page file on a gig of RAM.
The only time I noticed was when I was trying to get Iwar to work with Zeckensack's glide wrapper.

Turned out that you need to have your page file set to 512MB or less with 1GB of RAM to get Iwar to run in 3dfx mode with the wrapper.

That's about the only time I've run into any issues.

WinXP can only address 2GB (2.5? 4?) anyway, so there would be no point in having e.g. a 10GB pagefile.
Perhaps with Win x64 you might...
 
I go with the default system-managed page file size... have yet to max out on RAM usage. Got 1 gig, but I guess when I get Photoshop CS2 installed and work on my DSLR pics and so on, I'm going to up the main RAM to 2 gigs on my laptop and up the hard drive as well... just want to try and minimize hard drive thrashing as much as possible.
 
akirru said:
Hi,

I have a gig of RAM currently and XP Pro, and I'm wondering what I should set the paging file size to? It's currently whatever the default setting is :)

Many Thanks

I have no virtual memory for XP Pro on 1GB.
Even when I'm running Gentoo in VMware with a 412MB RAM pool for itself.

Not that I do much RAM-demanding stuff; even with all the apps I use open, I don't fill up 512MB.
 
Personally I have a fixed-size 1 GB pagefile (I think... might be larger) to prevent fragmentation of the pagefile and other files caused by a variable pagefile size.
 
Hmm, so Tokelil, you are recommending not to let the OS determine the size of the page file, but rather to set it to a certain limit... like min size 512, max size 1536... something like that?
 
arrrse said:
WinXP can only address 2GB (2.5? 4?) anyway, so there would be no point in having e.g. a 10GB pagefile.
The PHYSICAL memory limit is 4GB, but the virtual memory limit is much, much larger and allows quite enormous pagefiles to be used, though of course it will be extraordinarily slow should the system start running largely out of pagefile memory...

Also, one would do well to set the pagefile to a FIXED size, preferably on an otherwise empty drive. This will cut down on hard drive seeking (which would otherwise limit performance greatly).
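
If you want to see what is configured without clicking through the System control panel: XP keeps the pagefile setting in the registry as "path initial_MB maximum_MB" strings under the Memory Management key. Here's a minimal, read-only sketch in C using only classic Win32 calls (compiles with e.g. MinGW); the buffer size and output formatting are just illustrative:

Code:
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HKEY key;
    char data[1024];
    DWORD size = sizeof(data), type = 0;

    /* Pagefile settings live in HKLM\SYSTEM\CurrentControlSet\
       Control\Session Manager\Memory Management, value "PagingFiles". */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "could not open Memory Management key\n");
        return 1;
    }

    if (RegQueryValueExA(key, "PagingFiles", NULL, &type,
                         (LPBYTE)data, &size) == ERROR_SUCCESS
            && type == REG_MULTI_SZ) {
        /* REG_MULTI_SZ is a run of NUL-terminated strings ending with an
           extra NUL; one entry per pagefile, e.g. "C:\pagefile.sys 768 1536". */
        for (const char *s = data; *s != '\0'; s += strlen(s) + 1)
            printf("%s\n", s);
    }

    RegCloseKey(key);
    return 0;
}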
 
I have a manual setting of 768MB initial size and 1534MB maximum size on my secondary drive. Of course I don't see any benefits because of my shit motherboard.
 
Afaik 32-bit Windows will normally only let you use 3GB of RAM for user processes; the rest is reserved for the OS.
 
cristic said:
Afaik 32-bit Windows will normally only let you use 3GB of RAM for user processes; the rest is reserved for the OS.

It's a bit more complicated than that.

1. Memory is accessed in 4KB pages, so with 32-bit page numbers the largest total amount of memory that could be managed is 2^32 x 4KB = 16TB, if your processor supports that. For example, an Opteron supports 48-bit virtual memory access.

2. The amount of memory that can be addressed within a single process is specified in segments. The maximum size of a segment is 4GB (32-bit). For ease of use, each process nowadays uses a set of code and data segments that map to the same location and are initialized to a size of 4GB. Although the physical starting point of those segments is often different for each process, they are all mapped at the start of their own 4GB segment(s).

3. For ease of use, Windows restricts the maximum amount of memory that can be managed: first to 2GB, so everything can be done with signed 32-bit integers (and all negative values can be used for errors), and with 2000/XP to 4GB. That way, the operating system can access all of it directly.

4. The OS reserves part of that address space for its own use. With Windows NT this was half of it; with 2000/XP it is 1GB. So the largest amount of memory that can be addressed is still 4GB, but you can only use 3GB of it yourself; the remainder is used by the OS. (Linux/Unix do this differently.)

5. If you use signed 32-bit integers to do address calculations, you can only use half of the maximum amount. That leaves you with 2GB at most. And you only have to use a signed integer by mistake in one obscure place to create really hard-to-track errors for all the memory above 2GB.

6. Due to all those limits, single structures (like files) should never exceed 2GB, even if theoretically possible.

7. Of course, if you use a 64-bit processor, you expand all of that by orders of magnitude. But if you try to run a 32-bit program, it probably breaks down when it tries to access a 64-bit value. That is the main problem with switching to 64 bits.

On the other hand, if you do a malloc(8589934592) on Linux64, it should just return you a pointer, even if you don't have more than 8GB of memory in your computer (see the sketch below).

:D
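
To make points 5 and 7 (and the malloc remark) a bit more concrete, here is a minimal sketch, assuming a 64-bit Linux box with glibc and the kernel's default overcommit behaviour; the 8GB figure is just the number quoted above:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
    /* Point 5: 2GB is exactly where a signed 32-bit offset stops. */
    printf("INT_MAX            = %d bytes (~2GB)\n", INT_MAX);
    printf("INT_MAX + 1 as int = %d (wraps negative on typical systems)\n",
           (int)((unsigned int)INT_MAX + 1u));

    /* The Linux64 remark: ask for 8GB even on a machine with less RAM.
       With overcommit the kernel hands out address space immediately and
       only backs it with physical pages (or swap) when they are touched. */
    void *p = malloc(8589934592ULL);
    printf("malloc(8GB)        = %p (usually non-NULL)\n", p);
    free(p);
    return 0;
}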
 
You should leave the pagefile at default (put it on another partition if you have to), or set a size and forget it.

1GB RAM with no pagefile can be troublesome; you could run out of memory when gaming (happened to a friend, I hadn't seen an out-of-memory error for ages :))
 
If you don't game, or the games you run don't have a problem with it (test by trying), then I'd recommend turning off the paging file; it gives far better performance than any other setting.

Apart from that, you need to understand that anything cached in your RAM (files) will also be present in the page file; this is also the reason it is always faster to use Windows without one. So setting it to 1GB with 1GB of RAM will only make your system slower, with no real benefit.

So if you want/need to use it, I would recommend a fixed 2GB size, though in that case I would also consider just buying the extra RAM; it's really cheap anyway.
 
Btw, if you use much more virtual memory than you have RAM (it depends, but in general more than four times the amount of RAM), and you do run low on memory, you often get double page faults: those happen when the page index for that page itself is swapped to hard disk. That stalls everything for seconds at a time. So, whatever you do, don't make the page file larger than 3 times your actual memory (or 2GB with Windows, whichever comes first).

By disabling virtual memory you actually have much more RAM free to be used, as the OS will try to use most of it for buffers and caches to speed up hard disk access in the first place. So any page file that is only half as big as your RAM size or smaller is probably slowing things down quite a bit as well.

For Linux it doesn't matter very much and it is always a good idea to have a swap partition that is about twice your RAM, as everything is optimized on the fly. So it will try and do whatever is fastest at that moment.
 
Did any of you ever try to have Windows, or a Zip program running on Windows, create a zip file larger than 2GB? It will probably do so happily, but you won't be able to unzip it afterwards. :D
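
For anyone wondering why that happens: the original ZIP format only has 32-bit fields for sizes and offsets (ZIP64 was added later to lift that limit), and presumably plenty of older tools treat those fields as signed 32-bit numbers on top of that, which is where the 2GB wall comes from. A small sketch of the classic local file header, with a made-up 3GB entry just for illustration:

Code:
#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
struct zip_local_file_header {
    uint32_t signature;          /* 0x04034b50 */
    uint16_t version_needed;
    uint16_t flags;
    uint16_t compression;
    uint16_t mod_time;
    uint16_t mod_date;
    uint32_t crc32;
    uint32_t compressed_size;    /* only 32 bits wide */
    uint32_t uncompressed_size;  /* only 32 bits wide */
    uint16_t name_length;
    uint16_t extra_length;
};
#pragma pack(pop)

int main(void)
{
    uint64_t real_size = 3ULL * 1024 * 1024 * 1024;   /* a 3GB entry */
    uint32_t stored    = (uint32_t)real_size;         /* what the header can hold */
    int32_t  as_signed = (int32_t)stored;             /* what a careless tool sees */

    printf("header is %u bytes\n", (unsigned)sizeof(struct zip_local_file_header));
    printf("real size         : %llu\n", (unsigned long long)real_size);
    printf("stored in 32 bits : %u\n", stored);
    printf("read as signed    : %d\n", as_signed);    /* negative -> unzip breaks */
    return 0;
}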
 
Sounds sort of like a HyperStudio project I made. It ran happily when we created it, but crashed and burned when we tried to present it. Not even our resident guru, who had a tendency to fix these types of problems, could save it. Well, anyway, we got an exemption on the project, so it didn't really matter that much.
 