what benefits to 64bit?

Sxotty

Most articles talking about the AMD 64-bit foray imply that the only benefit is that instead of being limited to 4GB of RAM you can have a lot more. But I mean, who cares? 4GB is plenty for most desktop users. Are there not other tangible benefits if 64-bit drivers/operating systems are made? Will the AMD chip simply run twice as much data half as fast? I mean, what is the real deal here?

If I recall correctly, most game systems had a processor that was more than 32-bit: GC = 64-bit, PS2 = 128-bit, and so on. This led me to believe that there was a benefit, and if so, why are the reviews not discussing any of this stuff? Is it simply pie in the sky, and since it is a future idea, do they not bother mentioning something that only might exist?
 
Sxotty said:
Most articles talking about the AMD 64-bit foray imply that the only benefit is that instead of being limited to 4GB of RAM you can have a lot more. But I mean, who cares? 4GB is plenty for most desktop users. Are there not other tangible benefits if 64-bit drivers/operating systems are made? Will the AMD chip simply run twice as much data half as fast? I mean, what is the real deal here?

Here's a good little article on this sort of thing: http://www.theinquirer.net/?article=11819
 
Other than >4GB address space and the benefits that it brings, 64-bit support as such can help in a number of situations:
  • Copying of data, e.g. using memcpy(), gets quite a bit faster as you move larger chunks of raw data at a time
  • Several encryption algorithms (such as, IIRC, RSA and AES) benefit greatly from having 64-bit arithmetic available; in particular, a 64-bit multiply can result in a 2x to 4x speedup over 32-bit.
  • Certain programming techniques that do "SIMD-within-a-register" can get a good speedup from 64-bit operations being available. There are implementations of e.g. strlen() that actually benefit from 64-bit support (a minimal sketch follows below).
Outside of raw data copying and 64-bit pointers, you generally need to code specifically for 64-bit operation to reap the full benefits. 64-bit support can also harm performance, due to pointer registers widening from 32 to 64 bits, gobbling up extra bandwidth and cache space.
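
To make the SWAR point concrete, here is a minimal sketch of a 64-bit strlen() using the classic zero-byte test, assuming 8-byte-aligned input; it is illustrative, not production code:

```c
#include <stdint.h>
#include <stddef.h>

/* SWAR strlen sketch: scan 8 bytes per iteration on a 64-bit CPU.
   Assumes s is 8-byte aligned so the whole-word reads stay within
   valid memory. */
size_t swar_strlen(const char *s)
{
    const uint64_t *w = (const uint64_t *)s;
    for (;;) {
        uint64_t v = *w;
        /* Zero-byte test: nonzero iff some byte of v is 0x00. */
        uint64_t t = (v - 0x0101010101010101ULL) & ~v
                     & 0x8080808080808080ULL;
        if (t) {
            const char *p = (const char *)w;
            while (*p) p++;          /* locate the exact NUL byte */
            return (size_t)(p - s);
        }
        w++;
    }
}
```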

As for GC/PS2, IIRC, 64 and 128 bits are the widths of their SIMD registers, not the width of the general-purpose/pointer registers, which is what is otherwise normally meant by the bitness of the architecture. As a comparison, the Pentium 3 and 4 (and Athlon XP and 64) have 128-bit SIMD registers for SSE operation.

As for the AMD64 architecture, you get the additional benefit (over plain x86) of a larger register file, and thus a performance increase of roughly 0-30%, but this is really a separate issue from the 64-bitness of the architecture itself.
 
BZB, thanks, that was an interesting read. Have you read it, and do you think it is fairly accurate, then? I was just wondering; it sounds a little "too good", if you know what I mean, but I am looking forward immensely to next spring when I upgrade again.
 
Sxotty said:
BZB, thanks, that was an interesting read. Have you read it, and do you think it is fairly accurate, then? I was just wondering; it sounds a little "too good", if you know what I mean, but I am looking forward immensely to next spring when I upgrade again.

Yeah, it seems pretty sound to me. The main problem with the 32-bitness we have now is that we run out of numbers quickly.

For example, the whole issue with maximum file size is that you can't address a bigger number than you can count (assuming the OS can use the maximum addressing that the CPU can). That "biggest number" is limited, because you can't count higher than the bits you have to count with allow. With 64 bits you can count a lot higher, and so can have a much larger file.
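
As a trivial illustration of those counting limits (a sketch in C; the exact maximums are 2^32 - 1 and 2^64 - 1):

```c
#include <stdint.h>
#include <stdio.h>

/* The "biggest number you can count to" with 32 vs. 64 bits:
   roughly 4 GiB vs. 16 EiB worth of byte offsets. */
int main(void)
{
    printf("32-bit max: %lu bytes (~4 GiB)\n",
           (unsigned long)UINT32_MAX);
    printf("64-bit max: %llu bytes (~16 EiB)\n",
           (unsigned long long)UINT64_MAX);
    return 0;
}
```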

This is important for things like the next Epic engine, because their content-creation tools will be hitting 32-bit limits. Likewise, the range available to build your game world becomes larger.

A better example of what I'm trying to explain is the HDR stuff we're now seeing for Half-Life. The lighting needs a big enough range to do the overbrightening. There are good examples in the "Rendering with Natural Light" and "Car" demos available here: http://www.ati.com/developer/demos/r9700.html, which can show the difference both with and without.

Without that range, you don't have enough granularity to do subtle things, or in the case of 64-bit, enough bits to build a really big world without lots of tricks to fake it. 64-bit gives us more range and more granularity, which is the sort of general improvement that games could benefit from, as per the article I referenced.
 
If all you want is 64-bit arithmetic, MMX/SSE already provide much of it, including 64-bit add/sub, bitwise logic, and shifts (no rotate :( ), plus a 32-bit -> 64-bit multiply.
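
For illustration, a sketch of what this looks like through compiler intrinsics (these particular instructions, PADDQ and PMULUDQ, are the SSE2 forms, as noted further down):

```c
#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>

/* 64-bit add via PADDQ on a 32-bit x86 CPU with SSE2; values pass
   through memory since there are no 64-bit general registers. */
uint64_t add_u64_sse2(uint64_t a, uint64_t b)
{
    __m128i va = _mm_loadl_epi64((const __m128i *)&a);
    __m128i vb = _mm_loadl_epi64((const __m128i *)&b);
    uint64_t r;
    _mm_storel_epi64((__m128i *)&r, _mm_add_epi64(va, vb)); /* PADDQ */
    return r;
}

/* 32-bit x 32-bit -> 64-bit unsigned multiply via PMULUDQ. */
uint64_t mul_u32_wide_sse2(uint32_t a, uint32_t b)
{
    __m128i p = _mm_mul_epu32(_mm_cvtsi32_si128((int)a),
                              _mm_cvtsi32_si128((int)b)); /* PMULUDQ */
    uint64_t r;
    _mm_storel_epi64((__m128i *)&r, p);
    return r;
}
```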

On many CPUs, a memcpy using 64-bit moves (FPU or MMX) is not going to be much faster, since the CPU is already fast compared to memory.
 
Bouncing Zabaglione Bros. said:
For example, the whole issue with maximum file size is that you can't address a bigger number than you can count (assuming the OS can use the maximum addressing that the CPU can). That "biggest number" is limited, because you can't count higher than the bits you have to count with allow. With 64 bits you can count a lot higher, and so can have a much larger file.
The current implementation of NTFS supports a file size of up to 2^44 - 2^16 bytes (16 TB minus 64 KB).

A better example of what I'm trying to explain is the HDR stuff we're now seeing for Half-Life. The lighting needs a big enough range to do the overbrightening. There are good examples in the "Rendering with Natural Light" and "Car" demos available here: http://www.ati.com/developer/demos/r9700.html, which can show the difference both with and without.
Is this not from using floating-point numbers instead of integers? In which case, a 64-bit processor offers no inherent advantage.

Without that range, you don't have enough granularity to do subtle things, or in the case of 64-bit, enough bits to build a really big world without lots of tricks to fake it. 64-bit gives us more range and more granularity, which is the sort of general improvement that games could benefit from, as per the article I referenced.
Aren't coordinates almost always given as floating point?
 
Accord1999 said:
Aren't coordinates almost always given as floating point?

Yes. It would be a waste to use 64-bit integers for coordinates in most cases. A 32-bit float provides 24 bits of precision, which means that if your smallest measure is 1 mm, the largest measure can still be larger than 16 km (2^24 mm ≈ 16.8 km).
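
A tiny demonstration of where that 24-bit precision runs out, assuming 1 unit = 1 mm:

```c
#include <stdio.h>

/* At 2^24 units (~16.8 km if 1 unit = 1 mm), a 32-bit float has no
   mantissa bits left for single units: adding 1 changes nothing. */
int main(void)
{
    float pos = 16777216.0f;          /* 2^24 mm, about 16.8 km */
    float moved = pos + 1.0f;         /* try to move 1 mm further */
    printf("before: %.1f  after: %.1f\n", pos, moved);
    /* prints: before: 16777216.0  after: 16777216.0 */
    return 0;
}
```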
 
pcchen said:
If all you want is 64-bit arithmetic, MMX/SSE already provide much of it, including 64-bit add/sub, bitwise logic, and shifts (no rotate :( ), plus a 32-bit -> 64-bit multiply.
The 64-bit add/subtract (PADDQ/PSUBQ) and the 32->64-bit multiply (PMULUDQ) were only added in SSE2. For the multiply, a 32->64-bit multiply that works on the general-purpose registers has been present since the 386 days; AMD64 adds a fast 64->128-bit multiply.
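
A sketch of that 64->128-bit multiply from C, using the GCC/Clang unsigned __int128 extension; on an x86-64 target this typically compiles down to a single MUL:

```c
#include <stdint.h>

/* Full 64x64 -> 128-bit product; on x86-64 this boils down to one
   MUL instruction leaving the result in RDX:RAX. */
void mul_u64_wide(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;
    *hi = (uint64_t)(p >> 64);
}
```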
 
arjan de lumens said:
For the multiply, a 32->64-bit multiply that works on the general-purpose registers has been present since the 386 days; AMD64 adds a fast 64->128-bit multiply.

Yes, but a 64-bit result fixed to EDX:EAX is not the best thing to have. Most x86 compilers now use IMUL in its 32-bit -> 32-bit form, which has more flexibility in register use.
 
WRT AMD64, the performance gains quoted by most people (Epic, for example) I presume come from the extra registers, not the 64-bitness as such. This, I feel, is missed by lots of commentators who say things like "who needs more than 4 gig of RAM?" as if that were the only benefit of AMD64. The immediate gains come from the extra registers, IMHO.

I also think that the RAM limitation will come sooner than many expect, even for home users, with games, home video editing, etc. I currently have 1 gig of RAM and have done for some time; I presume many readers of this site do too. In Jan/Feb next year I intend to upgrade to an Athlon FX with 2 gig of RAM (once it uses normal DDR), and I would think that within 18 months there will be home users wanting more than that. In these days of dual-channel memory (and currently limited memory slots) you seem to end up doubling up every time: a few years ago I had 192MB of RAM, and last year 640MB, but it's not so easy to do that now.

I've started rambling a bit. But in two years' time (or less) I expect that 2 gig per process will not be enough for even a lot of home users. There is no point in waiting for it to get to that stage before doing something about it!
 
Yes, if used wisely, the extra registers are probably good for 10%~20% more performance in most applications. 64-bit computation is good for some workloads (Huffman coding, for example; see the sketch below), but not all applications.
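
A minimal sketch of why 64-bit registers help a Huffman-style decoder: the bit buffer holds up to eight bytes at a time, so it refills from memory far less often than a 32-bit buffer would. The names here are illustrative, not from any particular codec.

```c
#include <stdint.h>

/* LSB-first bit reader with a 64-bit accumulator. */
typedef struct {
    const uint8_t *src;   /* compressed input stream */
    uint64_t buf;         /* bit accumulator */
    int bits;             /* number of valid bits in buf */
} bitreader;

/* Return the next n bits (n <= 56), refilling one byte at a time. */
static uint64_t get_bits(bitreader *br, int n)
{
    while (br->bits < n) {                       /* refill */
        br->buf |= (uint64_t)*br->src++ << br->bits;
        br->bits += 8;
    }
    uint64_t v = br->buf & ((1ULL << n) - 1);
    br->buf >>= n;
    br->bits -= n;
    return v;
}
```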

Regarding the size of memory... I believe that most home users have only 512MB or less RAM (all three of my computers have 512MB). 1GB or more is not common. Applications using more than 2GB of RAM will be rare even two years from now.

By the way, I keep hearing people use video editing as an example of requiring more RAM. However, I don't see the reason. Most people edit at 720x480 @ 29.97 fps or 720x576 @ 25 fps. At 2 bytes per pixel, a second of uncompressed YUY2 data takes only about 20MB (720 × 480 × 2 bytes × 29.97 fps ≈ 20.7 MB). With 2GB of RAM, you can put more than a minute of uncompressed video in memory, and I don't know why one would want to do that. I use my puny 512MB machine to edit compressed video files of more than 60GB, and I don't think more RAM would speed things up much.
 